title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Ollama modelfies (cabybara 34b) | 1 | [removed] | 2023-12-31T13:53:55 | https://www.reddit.com/r/LocalLLaMA/comments/18v887x/ollama_modelfies_cabybara_34b/ | Wonderful-Eye-71 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18v887x | false | null | t3_18v887x | /r/LocalLLaMA/comments/18v887x/ollama_modelfies_cabybara_34b/ | false | false | self | 1 | null |
Finetuning code generation model on our own repositories | 4 | Hello,
I'm working on a code generation system for my personal use. I've had some success with a RAG system using langchain to add similar snippets as context, concatenate them, and pass them as a prompt to a Code Llama endpoint. My next step would be to finetune the model on my own code base.
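For context, the retrieval side of what I have now looks roughly like this (just a sketch; the file paths, chunk sizes, and query are placeholders, not my exact setup):

```python
# Rough sketch of the retrieval step (placeholder paths and chunk sizes).
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS

source_files = ["src/app.py", "src/utils.py"]  # placeholder repo files
splitter = RecursiveCharacterTextSplitter(chunk_size=800, chunk_overlap=100)
chunks = splitter.create_documents([open(p).read() for p in source_files])

db = FAISS.from_documents(chunks, HuggingFaceEmbeddings())
snippets = db.similarity_search("how do we open a websocket connection?", k=3)

prompt = "\n\n".join(d.page_content for d in snippets) + "\n\n# Task: ..."
# `prompt` then goes to the Code Llama endpoint.
```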
I've seen several tutorials with axolotl on a text-to-SQL generation dataset, or on some prompt answering dataset, but I haven't seen anything like finetuning the model on multiple user-owned repos.
This surprises me, as I would have thought this was the next logical step to improve accuracy. Am I mistaken? Is the knowledge added by finetuning already present in the context? Is the performance gain not worth the hassle?
Has somebody already tried this and gotten some feedback?
Thanks a lot
​
| 2023-12-31T13:41:39 | https://www.reddit.com/r/LocalLLaMA/comments/18v80cn/finetuning_code_generation_model_on_our_own/ | Wats0ns | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18v80cn | false | null | t3_18v80cn | /r/LocalLLaMA/comments/18v80cn/finetuning_code_generation_model_on_our_own/ | false | false | self | 4 | null |
Possible to use a new GPU with an old Xeon? | 3 | Hello guys, thanks to tutorials I've been able to run small models on my laptop CPU, but it's a rather shitty experience, so I would like to get into GPU processing.
Only desktop I have is based on 6core X58 Xeon with 24GB triple channel RAM, still good enough to fulfill the home Linux server role. So I was wondering, would it be possible to just buy used Nvidia GPU with like 16GB VRAM and run the larger models? I understand that having only PCIe gen 2 would be a limiting factor, but once the model is loaded in VRAM it shouldn't be that big of a deal, right? Or is there some other issue I don't see? 🙏 | 2023-12-31T13:13:13 | https://www.reddit.com/r/LocalLLaMA/comments/18v7ich/possible_to_use_new_gpu_with_old_xeon/ | EnthusiasmNo4596 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18v7ich | false | null | t3_18v7ich | /r/LocalLLaMA/comments/18v7ich/possible_to_use_new_gpu_with_old_xeon/ | false | false | self | 3 | null |
how to uncensor a llama model? | 1 | [removed] | 2023-12-31T13:11:07 | https://www.reddit.com/r/LocalLLaMA/comments/18v7h33/how_to_uncensor_a_llama_model/ | bharathdp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18v7h33 | false | null | t3_18v7h33 | /r/LocalLLaMA/comments/18v7h33/how_to_uncensor_a_llama_model/ | false | false | self | 1 | null |
how to uncensor llama model? | 1 | [removed] | 2023-12-31T13:10:29 | https://www.reddit.com/r/LocalLLaMA/comments/18v7go8/how_to_uncensor_llama_model/ | bharathdp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18v7go8 | false | null | t3_18v7go8 | /r/LocalLLaMA/comments/18v7go8/how_to_uncensor_llama_model/ | false | false | self | 1 | null |
Problem downloading LLaMa 2 13B chat-hf model (the model is divided in 3 files) | 2 | I am about to embark on experimenting with "RAG on Windows using TensorRT-LLM and LlamaIndex".
Since I have an RTX 4070, it is written in Nvidia's instructions that **I need to build the TRT Engine based on LLaMa 2 13B chat-hf and LLaMa 2 13B AWQ int4**.
I have already obtained access to the HF model.
Nvidia says, of course, that **I have to download the LLaMa 2 13B chat-hf model (**[this is the link](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf/tree/main)**)**, but on the HuggingFace page the model is divided into **three .safetensors files and three .bin files referring to the pytorch version**.
https://preview.redd.it/h0cjk8ur5m9c1.png?width=1584&format=png&auto=webp&s=12f0888949a7d20694d63a28f72121b079bec4ef
What should I do about this?
How do I "download" the LLaMa 2 13B chat-hf model as it is indicated by Nvidia?
Thank you. | 2023-12-31T11:15:00 | https://www.reddit.com/r/LocalLLaMA/comments/18v5qek/problem_downloading_llama_2_13b_chathf_model_the/ | ilgrillo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18v5qek | false | null | t3_18v5qek | /r/LocalLLaMA/comments/18v5qek/problem_downloading_llama_2_13b_chathf_model_the/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'W6KT1NvX73eK0guAxFXt8clvCch0F6ARLE3XCrVCIq0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ay-2SGpyy3hf3Y4APvGyb2V9IGaHNfonKne65VUMlwE.jpg?width=108&crop=smart&auto=webp&s=840ae1f7229bca2e7b1ce3fdc093d8aab22714d5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ay-2SGpyy3hf3Y4APvGyb2V9IGaHNfonKne65VUMlwE.jpg?width=216&crop=smart&auto=webp&s=2a12b09690fa3127480d5c8216a14c95ce85c970', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ay-2SGpyy3hf3Y4APvGyb2V9IGaHNfonKne65VUMlwE.jpg?width=320&crop=smart&auto=webp&s=d69cb1a6d5466e76a613f8a5cefced6fcef37472', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ay-2SGpyy3hf3Y4APvGyb2V9IGaHNfonKne65VUMlwE.jpg?width=640&crop=smart&auto=webp&s=45e3060e309467811e9fc98cd1ccf114afcf5f89', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ay-2SGpyy3hf3Y4APvGyb2V9IGaHNfonKne65VUMlwE.jpg?width=960&crop=smart&auto=webp&s=3347be03ab47351fe1ac6ff3bc4b2fb4584566a9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ay-2SGpyy3hf3Y4APvGyb2V9IGaHNfonKne65VUMlwE.jpg?width=1080&crop=smart&auto=webp&s=5688a87ecb5e66a746ba3d50486449cfec8f5e9c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ay-2SGpyy3hf3Y4APvGyb2V9IGaHNfonKne65VUMlwE.jpg?auto=webp&s=ac2ab2075a555f684cca345df12738266aa81581', 'width': 1200}, 'variants': {}}]} | |
Are there any chat demos with >100 tps inference? | 2 | I tried using koboldai lite but it was awfully slow
Can anyone share a link to an online demo with blazingly fast chat?
thank you! | 2023-12-31T11:00:36 | https://www.reddit.com/r/LocalLLaMA/comments/18v5imx/are_there_any_chat_demos_with_100_tps_inference/ | m0dE | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18v5imx | false | null | t3_18v5imx | /r/LocalLLaMA/comments/18v5imx/are_there_any_chat_demos_with_100_tps_inference/ | false | false | self | 2 | null |
CrewAI agent framework with local models | 16 | This is great news for everyone who wants to develop agentic software. After a lot of failures and disappointments with running Autogen with local models, I tried the rising star of agent frameworks, CrewAI. It is a multi-agent framework based on LangChain and utilizes LangChain's recently added support for Ollama's JSON mode for reliable function calling. The developer treats local models as first-class citizens.
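To give a feel for the API, a minimal "hello world" with a local model looks roughly like this (my own sketch following the project README; the roles and tasks are made up and are not the stock-analysis example):

```python
# Minimal CrewAI sketch with a local Ollama model (illustrative only).
from crewai import Agent, Task, Crew
from langchain.llms import Ollama

llm = Ollama(model="mistral")  # Mistral 7B Instruct served by Ollama

researcher = Agent(
    role="Researcher",
    goal="Collect the key facts about a topic",
    backstory="A meticulous analyst who cites sources.",
    llm=llm,
)
writer = Agent(
    role="Writer",
    goal="Turn the research notes into a short report",
    backstory="A concise technical writer.",
    llm=llm,
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[
        Task(description="Research the pros and cons of MoE models.", agent=researcher),
        Task(description="Write a one-page summary of the findings.", agent=writer),
    ],
)
print(crew.kickoff())
```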
I tried the [stock analysis](https://github.com/joaomdmoura/crewAI-examples/tree/main/stock_analysis) example code where the agents use a wide range of tools including google searches and subsequently summarizing the top results, storing them in a vector database and generating analysis based on the findings. I tried it with Mistral 7B instruct 0.2 via Ollama on my MacBook Pro M1 16 GB laptop. It took the three agents 15-20 min to perform all the research, RAG, and analysis to come up with a reasonable report. Very cool. | 2023-12-31T10:27:23 | https://github.com/joaomdmoura/crewAI | krazzmann | github.com | 1970-01-01T00:00:00 | 0 | {} | 18v527r | false | null | t3_18v527r | /r/LocalLLaMA/comments/18v527r/crewai_agent_framework_with_local_models/ | false | false | 16 | {'enabled': False, 'images': [{'id': 'MdmEUFo7Uutc6MPkqRFbgIe43P1RX6FB7KKBeriIr3s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ITsX1CibUOyj0ofOHa76kTYT6htZ_TkVy94_bRbsIa4.jpg?width=108&crop=smart&auto=webp&s=b329eccfdf7bdbc44193c6a5c43bbcc28083d835', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ITsX1CibUOyj0ofOHa76kTYT6htZ_TkVy94_bRbsIa4.jpg?width=216&crop=smart&auto=webp&s=91422b00b9cf0fb9d93a1da6a0b950edddcc4705', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ITsX1CibUOyj0ofOHa76kTYT6htZ_TkVy94_bRbsIa4.jpg?width=320&crop=smart&auto=webp&s=6d1516e3bbe1d58daa3806d99d24c37f23ba1a41', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ITsX1CibUOyj0ofOHa76kTYT6htZ_TkVy94_bRbsIa4.jpg?width=640&crop=smart&auto=webp&s=5c39e1b8525960b52df0b6cb00d781b3fe4764c5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ITsX1CibUOyj0ofOHa76kTYT6htZ_TkVy94_bRbsIa4.jpg?width=960&crop=smart&auto=webp&s=d18a4d2130c6246ec05e721f3b5282d4f50c1559', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ITsX1CibUOyj0ofOHa76kTYT6htZ_TkVy94_bRbsIa4.jpg?width=1080&crop=smart&auto=webp&s=ed3d9e4060c1ca3eb299c6980f91abebcabfbf93', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ITsX1CibUOyj0ofOHa76kTYT6htZ_TkVy94_bRbsIa4.jpg?auto=webp&s=38896b92a7099202428b02a25760b0034a2454a5', 'width': 1200}, 'variants': {}}]} | |
Standalone end-to-end RAG pipeline with Rust | 1 | [removed] | 2023-12-31T10:25:44 | https://www.reddit.com/r/LocalLLaMA/comments/18v51df/standalone_endtoend_rag_pipeline_with_rust/ | supiri_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18v51df | false | null | t3_18v51df | /r/LocalLLaMA/comments/18v51df/standalone_endtoend_rag_pipeline_with_rust/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'uGtH_26ZQJKOoQM1iM0pNCXA6qh8t0thA3YBSDmylbo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/L8wy5gH-hv8SC0K2prEgBl0k7dU8YOQZ1n45rp_eqv0.jpg?width=108&crop=smart&auto=webp&s=e048bb432d67913b5b13b8bdeb72d57cec7c25dd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/L8wy5gH-hv8SC0K2prEgBl0k7dU8YOQZ1n45rp_eqv0.jpg?width=216&crop=smart&auto=webp&s=258c8c63b4cdb4b45a013445a025b5479158dd01', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/L8wy5gH-hv8SC0K2prEgBl0k7dU8YOQZ1n45rp_eqv0.jpg?width=320&crop=smart&auto=webp&s=d19e7eb69917b4a96f136eddd7046484a4371c27', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/L8wy5gH-hv8SC0K2prEgBl0k7dU8YOQZ1n45rp_eqv0.jpg?width=640&crop=smart&auto=webp&s=8743de5cbc1dfc6be608b384f3dad5cc0d18a4a1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/L8wy5gH-hv8SC0K2prEgBl0k7dU8YOQZ1n45rp_eqv0.jpg?width=960&crop=smart&auto=webp&s=5c12f5920e85d3be2e29174f0c3dbb75a5d0c8a2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/L8wy5gH-hv8SC0K2prEgBl0k7dU8YOQZ1n45rp_eqv0.jpg?width=1080&crop=smart&auto=webp&s=bf9501b538d9248e967b9c2d6febb4cee55bf875', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/L8wy5gH-hv8SC0K2prEgBl0k7dU8YOQZ1n45rp_eqv0.jpg?auto=webp&s=66c5f3344612a5ae422266f76b6d6e4ef1326046', 'width': 1200}, 'variants': {}}]} |
This is so Deep (Mistral) | 293 | 2023-12-31T09:38:09 | Supersonic97 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18v4crh | false | null | t3_18v4crh | /r/LocalLLaMA/comments/18v4crh/this_is_so_deep_mistral/ | false | false | 293 | {'enabled': True, 'images': [{'id': 'wjkohqzwZlcwQD1DYx82Iw1w3hMpBdynuJT9_C_TCN8', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/8ybz8qabol9c1.png?width=108&crop=smart&auto=webp&s=013e4bc0549794f9ccaa1670ae7f022238a2e907', 'width': 108}, {'height': 183, 'url': 'https://preview.redd.it/8ybz8qabol9c1.png?width=216&crop=smart&auto=webp&s=af651ce73085ca1f01356b132c2f018a50182d6a', 'width': 216}, {'height': 271, 'url': 'https://preview.redd.it/8ybz8qabol9c1.png?width=320&crop=smart&auto=webp&s=69583bdbad867cd86a244080d288482c3a4d0dac', 'width': 320}], 'source': {'height': 538, 'url': 'https://preview.redd.it/8ybz8qabol9c1.png?auto=webp&s=f1280e6df2a2c4b3c5fd4fc4107deac9e6d9be21', 'width': 635}, 'variants': {}}]} | |||
Deploying a finetuned OpenAI model is costly!! | 1 | Recently I finetuned OpenAI's GPT-3.5 Turbo on Azure AI Studio.
And bruh!!! To my surprise, it cost us $3,044 to host that model for 20 days :)
And I hardly made 10 requests against it.
Writing hardcore prompts is much better than finetuning, ugh............
Can we just use these models without deploying them? Any thoughts??
[can everyone stop normalizing finetuning ?? its very costly](https://preview.redd.it/lgqambi8gl9c1.jpg?width=1440&format=pjpg&auto=webp&s=7ac3a8e598b87e13544e6be8c34696357c15ec87) | 2023-12-31T08:57:24 | https://www.reddit.com/r/LocalLLaMA/comments/18v3rcm/deploying_a_finetuned_open_ai_model_is_costly/ | GlitteringAdvisor530 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18v3rcm | false | null | t3_18v3rcm | /r/LocalLLaMA/comments/18v3rcm/deploying_a_finetuned_open_ai_model_is_costly/ | false | false | 1 | null | |
llama2.c running on galixy watch 4 (tiny 44m model) | 183 | 2023-12-31T08:45:36 | https://v.redd.it/gsiod24gfl9c1 | esharp007 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18v3l4a | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/gsiod24gfl9c1/DASHPlaylist.mpd?a=1706604349%2COGNmOTFlYzNmOTA2YWYyODY5ZDU2MzZmMmViYmY3YjBmM2IxYjAwNzA1Zjg5MjUzMmE0MmY5MmIwNjAxYTBkMg%3D%3D&v=1&f=sd', 'duration': 21, 'fallback_url': 'https://v.redd.it/gsiod24gfl9c1/DASH_480.mp4?source=fallback', 'has_audio': False, 'height': 854, 'hls_url': 'https://v.redd.it/gsiod24gfl9c1/HLSPlaylist.m3u8?a=1706604349%2CYTQzMGFhYTFiNzFmOWNjMTAzZjA4MDM2ZWFmOGU4NDFkMjc2YWFlOGNmNDFkNGQ4ZDEzMWJhODBhZGQ5ZDMxOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/gsiod24gfl9c1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 474}} | t3_18v3l4a | /r/LocalLLaMA/comments/18v3l4a/llama2c_running_on_galixy_watch_4_tiny_44m_model/ | false | false | 183 | {'enabled': False, 'images': [{'id': 'ODUwaDhld2hmbDljMUexrx_XVkma-4OvSZFgdDmHTlAC7RPYOIjXyL8MIWSl', 'resolutions': [{'height': 194, 'url': 'https://external-preview.redd.it/ODUwaDhld2hmbDljMUexrx_XVkma-4OvSZFgdDmHTlAC7RPYOIjXyL8MIWSl.png?width=108&crop=smart&format=pjpg&auto=webp&s=9162e6977a14b0995d5de1138c94cb1373271c49', 'width': 108}, {'height': 388, 'url': 'https://external-preview.redd.it/ODUwaDhld2hmbDljMUexrx_XVkma-4OvSZFgdDmHTlAC7RPYOIjXyL8MIWSl.png?width=216&crop=smart&format=pjpg&auto=webp&s=196cb488357787732041e312e02f59bc6ac1f2e2', 'width': 216}, {'height': 576, 'url': 'https://external-preview.redd.it/ODUwaDhld2hmbDljMUexrx_XVkma-4OvSZFgdDmHTlAC7RPYOIjXyL8MIWSl.png?width=320&crop=smart&format=pjpg&auto=webp&s=7bec5c6aa283c15597707af338c31b58341c689e', 'width': 320}], 'source': {'height': 864, 'url': 'https://external-preview.redd.it/ODUwaDhld2hmbDljMUexrx_XVkma-4OvSZFgdDmHTlAC7RPYOIjXyL8MIWSl.png?format=pjpg&auto=webp&s=9fcb67afd36bad8b34dcdcca91ee7edbdb43973a', 'width': 480}, 'variants': {}}]} | ||
Did you know: Microsoft funded the development of an AI Language Model training tool called ToxiGen. It classifies phrases like "we should build a wall between the U." as hate speech. ChatGPT & Zuck's Llama2 are 99%-100% in compliance with ToxiGen. | 1 | So much for Open source models. | 2023-12-31T08:33:14 | https://twitter.com/fentasyl/status/1740783966305554631?t=F_J1_9AbXdnY2wx_fdZDRA&s=19 | Iboxelephants | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 18v3ef7 | false | null | t3_18v3ef7 | /r/LocalLLaMA/comments/18v3ef7/did_you_know_microsoft_funded_the_development_of/ | false | false | default | 1 | null |
Can I think beyond an M3 Max MacBook Pro for my portable offline personal GPT? | 1 | For a while I have been using an M3 Max Mac Studio as my LocalLLM machine; it gives me the freedom to operate without depending on an API service, and it is phenomenal.
Now I want to be able to carry my LocalLLM with me. The obvious thought was a maxed-out MacBook Pro, as I am already biased toward Mac performance for LLMs, but as I did some basic research, the 14th-gen Intel "Meteor Lake" CPUs seem like a decent competitor. Then I see some AMD chips are also in the fray.
I am completely ignorant of the landscape on this side. Is there a laptop which is non-Mac which can offer a similar (or even better performance)?
What are my choices here? | 2023-12-31T08:15:00 | https://www.reddit.com/r/LocalLLaMA/comments/18v34pg/can_i_think_beyond_m3_max_macbook_pro_for_my/ | codevalley | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18v34pg | false | null | t3_18v34pg | /r/LocalLLaMA/comments/18v34pg/can_i_think_beyond_m3_max_macbook_pro_for_my/ | false | false | self | 1 | null |
What is the best new LLM for fill in the middle (FIM) tasks? | 11 | StarCoder has been out since May and I can’t help but wonder if there are better LLMs for fill in the middle?
I saw deepseek coder, and their results are quite impressive, though I am skeptical about their benchmarks.
Also, can I just take Mistral, for example, and fine tune it on FIM tasks and potentially get better results?
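For clarity, by FIM tasks I mean this kind of prompting (shown with StarCoder's fill-in-the-middle tokens; the snippet, model choice, and generation settings are just illustrative):

```python
# What a fill-in-the-middle request looks like with StarCoder's FIM tokens
# (sketch; model and generation settings are placeholders).
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bigcode/starcoderbase-1b")
model = AutoModelForCausalLM.from_pretrained("bigcode/starcoderbase-1b")

prefix = "def average(xs):\n    total = "
suffix = "\n    return total / len(xs)\n"
prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=16)
middle = tok.decode(out[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True)
print(middle)  # the model's guess for the missing span
```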
Looking for advice as well as a discussion. | 2023-12-31T08:08:11 | https://www.reddit.com/r/LocalLLaMA/comments/18v3195/what_is_the_best_new_llm_for_fill_in_the_middle/ | ArtZab | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18v3195 | false | null | t3_18v3195 | /r/LocalLLaMA/comments/18v3195/what_is_the_best_new_llm_for_fill_in_the_middle/ | false | false | self | 11 | null |
Quantized Llama is speaking raw tokens? | 1 | I have been trying to quantize a Korean llama model.
[https://huggingface.co/beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)
I downloaded the entire model and followed the quantizing process shown in README from llama.cpp.
`python3 convert.py models/7B/`
`./quantize ./models/7B/ggml-model-f16.gguf ./models/7B/ggml-model-q4_0.gguf q4_0`
Then I tried using the model with llama-cpp-python, but this was the result:
*$USER: 안녕!*
*$CHARLES: <0xE3><0x85><0x8E><0xE3><0x85><0x8E>*
​
*$USER: hello!*
*$CHARLES: hey!*
​
*$USER: can you speak korean?*
*$CHARLES: 넹!*
​
the original model(beomi/llama-1-ko-7b) has added additional tokens, and it seems to have some problems.
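For what it's worth, the `<0x..>` pieces look like UTF-8 byte-fallback tokens rather than garbage; decoding them by hand (plain Python, nothing model-specific) gives real Hangul:

```python
# The byte-fallback pieces in the reply decode to real Korean text:
raw = bytes([0xE3, 0x85, 0x8E, 0xE3, 0x85, 0x8E])
print(raw.decode("utf-8"))  # 'ㅎㅎ' (Korean "hehe") -- the bytes themselves are
                            # fine, they just aren't being merged back into characters.
```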
I tried using the `--vocabtype bpe` argument from README, but it ended up with unrecognized argument error.
Thanks for any help! | 2023-12-31T07:35:18 | https://www.reddit.com/r/LocalLLaMA/comments/18v2jjf/quantized_llama_is_speaking_raw_tokens/ | Efficient_Eye_9061 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18v2jjf | false | null | t3_18v2jjf | /r/LocalLLaMA/comments/18v2jjf/quantized_llama_is_speaking_raw_tokens/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'cujGfK42b8yEAHLbW2spmVtADdD4C8LxqVQTuNVvcyk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/QycNWyvedYmAP0X8MkyovUwqPGOD5HzVlBpV6LuREjE.jpg?width=108&crop=smart&auto=webp&s=64d16c05b8414c65ddb476f1bd7e33e6f27dda59', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/QycNWyvedYmAP0X8MkyovUwqPGOD5HzVlBpV6LuREjE.jpg?width=216&crop=smart&auto=webp&s=715a2fcb475739d4edec9ec987a023674d417afd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/QycNWyvedYmAP0X8MkyovUwqPGOD5HzVlBpV6LuREjE.jpg?width=320&crop=smart&auto=webp&s=cd2cf948913ae2ecfdbd7ff5d75779f51d2f56fa', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/QycNWyvedYmAP0X8MkyovUwqPGOD5HzVlBpV6LuREjE.jpg?width=640&crop=smart&auto=webp&s=df3debe03276b8eef7543f15e16a0a50038c39e7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/QycNWyvedYmAP0X8MkyovUwqPGOD5HzVlBpV6LuREjE.jpg?width=960&crop=smart&auto=webp&s=7e3d3f8bacbd9a050815e82346e31d284f64e641', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/QycNWyvedYmAP0X8MkyovUwqPGOD5HzVlBpV6LuREjE.jpg?width=1080&crop=smart&auto=webp&s=b2209c81c83ca0ee727c8a528e8156f6b0bb1b16', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/QycNWyvedYmAP0X8MkyovUwqPGOD5HzVlBpV6LuREjE.jpg?auto=webp&s=28264be37144178afef28d48b86f890aaf1d8b0c', 'width': 1200}, 'variants': {}}]} |
Quantized Llama is speaking raw tokens? | 1 | I have been trying to quantize a Korean llama model.
[https://huggingface.co/beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)
I downloaded the entire model and followed the quantizing process shown in README from llama.cpp.
`python3 convert.py models/7B/`
`./quantize ./models/7B/ggml-model-f16.gguf ./models/7B/ggml-model-q4_0.gguf q4_0`
Then I tried using the model with llama-cpp-python, but this was the result:
*$USER: 안녕!*
*$CHARLES: <0xE3><0x85><0x8E><0xE3><0x85><0x8E>*
​
*$USER: hello!*
*$CHARLES: hey!*
​
*$USER: can you speak korean?*
*$CHARLES: 넹!*
​
the original model(beomi/llama-1-ko-7b) has added additional tokens, and it seems to have some problems.
I tried using the `--vocabtype bpe` argument from README, but it ended up with unrecognized argument error.
Thanks for any help! | 2023-12-31T07:35:14 | https://www.reddit.com/r/LocalLLaMA/comments/18v2ji8/quantized_llama_is_speaking_raw_tokens/ | Efficient_Eye_9061 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18v2ji8 | false | null | t3_18v2ji8 | /r/LocalLLaMA/comments/18v2ji8/quantized_llama_is_speaking_raw_tokens/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'cujGfK42b8yEAHLbW2spmVtADdD4C8LxqVQTuNVvcyk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/QycNWyvedYmAP0X8MkyovUwqPGOD5HzVlBpV6LuREjE.jpg?width=108&crop=smart&auto=webp&s=64d16c05b8414c65ddb476f1bd7e33e6f27dda59', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/QycNWyvedYmAP0X8MkyovUwqPGOD5HzVlBpV6LuREjE.jpg?width=216&crop=smart&auto=webp&s=715a2fcb475739d4edec9ec987a023674d417afd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/QycNWyvedYmAP0X8MkyovUwqPGOD5HzVlBpV6LuREjE.jpg?width=320&crop=smart&auto=webp&s=cd2cf948913ae2ecfdbd7ff5d75779f51d2f56fa', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/QycNWyvedYmAP0X8MkyovUwqPGOD5HzVlBpV6LuREjE.jpg?width=640&crop=smart&auto=webp&s=df3debe03276b8eef7543f15e16a0a50038c39e7', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/QycNWyvedYmAP0X8MkyovUwqPGOD5HzVlBpV6LuREjE.jpg?width=960&crop=smart&auto=webp&s=7e3d3f8bacbd9a050815e82346e31d284f64e641', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/QycNWyvedYmAP0X8MkyovUwqPGOD5HzVlBpV6LuREjE.jpg?width=1080&crop=smart&auto=webp&s=b2209c81c83ca0ee727c8a528e8156f6b0bb1b16', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/QycNWyvedYmAP0X8MkyovUwqPGOD5HzVlBpV6LuREjE.jpg?auto=webp&s=28264be37144178afef28d48b86f890aaf1d8b0c', 'width': 1200}, 'variants': {}}]} |
Training 3b model on Colab Pro Plus | 6 | Colab Pro Plus is $50 per month. I am trying to finetune a 3B model on it, and hoping to run it like 2 or 3 weeks per month. Is it worth it to pay that $50 bucks? Will it even work? | 2023-12-31T06:00:41 | https://www.reddit.com/r/LocalLLaMA/comments/18v0zit/training_3b_model_on_colab_pro_plus/ | Dense-Smf-6032 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18v0zit | false | null | t3_18v0zit | /r/LocalLLaMA/comments/18v0zit/training_3b_model_on_colab_pro_plus/ | false | false | self | 6 | null |
Is anyone actually using Langchain in production? | 87 | Langchain seems pretty messed up.
- The documentation is subpar compared to what one can expect from a tool that can be used in production. I tried searching for the difference between a chain and an agent without getting a clear answer.
- The Discord community is pretty inactive, honestly; there are so many unclosed queries still sitting in the chat.
- There are so many ways of creating, for instance, an agent, and the documentation fails to provide a structured approach that incrementally introduces these different methods.
So are people/companies actually using langchain in their products? | 2023-12-31T05:50:28 | https://www.reddit.com/r/LocalLLaMA/comments/18v0sxq/is_anyone_actually_using_langchain_in_production/ | todaysgamer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18v0sxq | false | null | t3_18v0sxq | /r/LocalLLaMA/comments/18v0sxq/is_anyone_actually_using_langchain_in_production/ | false | false | self | 87 | null |
Best 1b ~ 3b model? | 1 | [removed] | 2023-12-31T04:56:36 | https://www.reddit.com/r/LocalLLaMA/comments/18uztjr/best_1b_3b_model/ | Electronic_Hawk524 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uztjr | false | null | t3_18uztjr | /r/LocalLLaMA/comments/18uztjr/best_1b_3b_model/ | false | false | self | 1 | null |
They did it! Tinyllama version 1.0 is now out! | 477 | [TinyLlama/TinyLlama-1.1B-Chat-v1.0 · Hugging Face](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)
Very exiting stuff. This is a 1.1 billion param model trained on 3 trillion tokens! | 2023-12-31T04:33:07 | https://www.reddit.com/r/LocalLLaMA/comments/18uzdw5/they_did_it_tinyllama_version_10_is_now_out/ | Dazzling_Ad1507 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uzdw5 | false | null | t3_18uzdw5 | /r/LocalLLaMA/comments/18uzdw5/they_did_it_tinyllama_version_10_is_now_out/ | false | false | self | 477 | {'enabled': False, 'images': [{'id': 'oN89DCTlpN4ILjsqZ-eqHHBHOsMqFEAApHQdMqxL2uo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/sIemo2ZRTyfh8F28nwbT2_1K-FdX21f6h4iIinp0cts.jpg?width=108&crop=smart&auto=webp&s=86489a0d0a5efa5573fd0a7a1a298a1f686ca3fc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/sIemo2ZRTyfh8F28nwbT2_1K-FdX21f6h4iIinp0cts.jpg?width=216&crop=smart&auto=webp&s=9cea7936f43ac604ef3149813fc9854023b2aa44', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/sIemo2ZRTyfh8F28nwbT2_1K-FdX21f6h4iIinp0cts.jpg?width=320&crop=smart&auto=webp&s=1a0690f3f071a646b295daadf7b164228473c273', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/sIemo2ZRTyfh8F28nwbT2_1K-FdX21f6h4iIinp0cts.jpg?width=640&crop=smart&auto=webp&s=53b19b2635f4a385189f055ade13dd0a16901758', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/sIemo2ZRTyfh8F28nwbT2_1K-FdX21f6h4iIinp0cts.jpg?width=960&crop=smart&auto=webp&s=b60d4609bb2dafdc91ceaedf6ce3b21ca29c3eb0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/sIemo2ZRTyfh8F28nwbT2_1K-FdX21f6h4iIinp0cts.jpg?width=1080&crop=smart&auto=webp&s=06eb581c8ee845b13de6674c30a6a1752067e175', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/sIemo2ZRTyfh8F28nwbT2_1K-FdX21f6h4iIinp0cts.jpg?auto=webp&s=1bcc0c096508e861f8770b90d7eddb84f7d706f2', 'width': 1200}, 'variants': {}}]} |
Unable to load local model | 1 | It seems I am unable to load a local model for the SFTTrainer. I have no problem when using
tokenizer = AutoTokenizer.from_pretrained(model_location, add_eos_token=True, use_fast=False)
with the path being a local path..
Bad request for commit endpoint:
"base_model" with value "/Users/frederik/Projects/llmplay/llama.cpp/models/mistralai_Mistral-7B-Instruct-v0.2/" is not valid. Use a model id from https://hf.co/models.
How do you usually load a local model?
​
training_arguments = TrainingArguments(
output_dir="./models",
per_device_train_batch_size=default_batch_size,
gradient_accumulation_steps=default_accumulation_steps,
gradient_checkpointing = True,
evaluation_strategy="steps",
learning_rate=default_lr,
lr_scheduler_type="constant",
warmup_ratio=0.03,
max_grad_norm=0.3,
save_strategy="epoch",
logging_dir="./logs",
logging_steps=50,
num_train_epochs=default_epochs,
group_by_length=True,
fp16=False,
report_to=wand_to,
push_to_hub=True,  # this is what hits the Hub commit endpoint (hence the login and valid-repo-id requirement); set False for a purely local run
adam_beta2=0.999,
do_train=True,
**additional_params
)
trainer = SFTTrainer(
model=model,
train_dataset=dataset_train,
eval_dataset=dataset_test,
peft_config=peft_config,
dataset_text_field="text",
args=training_arguments,
tokenizer=tokenizer,
packing=default_packing,
max_seq_length=None,
neftune_noise_alpha=5
)
trainer.train()
trainer.save_model("./models-x")
​ | 2023-12-31T04:27:24 | https://www.reddit.com/r/LocalLLaMA/comments/18uza22/unable_to_load_local_model/ | BukHunt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uza22 | false | null | t3_18uza22 | /r/LocalLLaMA/comments/18uza22/unable_to_load_local_model/ | false | false | self | 1 | null |
CPU vision? | 1 | Is there a way to run a vision model on CPU? Slow is not a problem. I want to batch process photos. I have a lot to do so don't want to pay for a hosted GPU and I'm CPU only (well have Intel GPU but but doesn't seem to count). | 2023-12-31T04:15:31 | https://www.reddit.com/r/LocalLLaMA/comments/18uz23r/cpu_vision/ | Environmental-Tie942 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uz23r | false | null | t3_18uz23r | /r/LocalLLaMA/comments/18uz23r/cpu_vision/ | false | false | self | 1 | null |
llama.cpp is very slow - is there any way to fix it? | 1 | Hi,
I'm not sure if this is just what I'm going to have to deal with now or what to expect -- I'm running Mixtral at q8_0, and it took almost 20 minutes to load and generates at only 1 token/second.
I have an i7-12700F and 32 GB RAM, no layers offloaded to the GPU.
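For what it's worth, a quick size check (assuming Mixtral 8x7B is roughly 47B total parameters and Q8_0 costs a bit over one byte per weight) suggests the model simply doesn't fit in 32 GB, so the slow load and ~1 token/s look like disk paging rather than normal CPU speed:

```python
# Back-of-envelope: does Mixtral Q8_0 fit in 32 GB of RAM?
params = 46.7e9                 # ~46.7B total parameters for Mixtral 8x7B
bytes_per_weight = 34 / 32      # Q8_0: 32 one-byte weights + a 2-byte scale per block
print(f"~{params * bytes_per_weight / 1e9:.0f} GB")   # ~50 GB > 32 GB -> paging
```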
​ | 2023-12-31T03:59:24 | https://www.reddit.com/r/LocalLLaMA/comments/18uyr3v/llamacpp_is_very_slow_is_there_any_way_to_fix_it/ | AIWithASoulMaybe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uyr3v | false | null | t3_18uyr3v | /r/LocalLLaMA/comments/18uyr3v/llamacpp_is_very_slow_is_there_any_way_to_fix_it/ | false | false | self | 1 | null |
Why is it required to log in to Hugging Face to use SFTTrainer | 2 | I am new to ML, and whilst I am focusing on the theory, I also tend to get my hands dirty at least once a week. I created my first [train.py](https://train.py)! The goal is to train / fine-tune Mistral on my custom dataset using Apple silicon. So far, formatting the dataset, tokenization, etc. works! And oh boy, it is quite a lot of fun; I am actually learning way more by doing so, including what QLoRA is.
​
My question is more about the last part:
trainer = SFTTrainer(
model=model,
train_dataset=dataset_train,
eval_dataset=dataset_test,
peft_config=peft_config,
dataset_text_field="text",
args=training_arguments,
tokenizer=tokenizer,
packing=default_packing,
max_seq_length=None,
neftune_noise_alpha=5
)
trainer.train()
trainer.save_pretrained("./models")
Why do I need to log in to Hugging Face to use the SFTTrainer? Am I not training the model on my local machine?
​
>ValueError: Token is required (write-access action) but no token found. You need to provide a token or be logged in to Hugging Face with \`huggingface-cli login\` or \`huggingface\_hub.login\`. See [https://huggingface.co/settings/tokens](https://huggingface.co/settings/tokens).
​ | 2023-12-31T03:55:46 | https://www.reddit.com/r/LocalLLaMA/comments/18uyoo8/why_is_it_required_to_login_to_hugginface_to_use/ | BukHunt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uyoo8 | false | null | t3_18uyoo8 | /r/LocalLLaMA/comments/18uyoo8/why_is_it_required_to_login_to_hugginface_to_use/ | false | false | self | 2 | null |
Are we missing an obvious way to boost inference quality? | 88 | I recently tested Starling-LM-7B-alpha and Starling-LM-11B-alpha side-by-side, and confirmed to my satisfaction that stacking the same model's layers on top of each other does indeed improve inference quality.
Merging a model with itself like this effectively duplicates half of its layers. Tokens get processed on copies of the exact same layers twice.
So, do we really need to put both copies of layers into the model? Or could we just tell the inference stack "apply layers 4 through 28 twice" and get the same inference quality improvement?
If that worked, we could load a 7B model into memory, and have it act like a 7B or an 11B model (or larger?) without using the extra memory.
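As a concrete illustration of what I mean, something like this rough, untested sketch (layer indices are arbitrary, and the cache is disabled to sidestep KV-cache bookkeeping for the duplicated layers):

```python
# Rough, untested sketch of "run layers 8..23 twice" by reference -- the
# repeated entries point at the same modules, so no extra weight memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "berkeley-nest/Starling-LM-7B-alpha"   # any Llama/Mistral-style model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

layers = model.model.layers                   # 32 decoder layers on a 7B
order = list(layers[:24]) + list(layers[8:24]) + list(layers[24:])
model.model.layers = torch.nn.ModuleList(order)
model.config.num_hidden_layers = len(order)

ids = tok("The capital of France is", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=16, use_cache=False)
print(tok.decode(out[0]))
```

If that produced output comparable to the 11B self-merge, it would suggest the merged copies really are still identical.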
Am I missing something? Does merging models like this change the layers so that they are no longer identical? | 2023-12-31T03:36:58 | https://www.reddit.com/r/LocalLLaMA/comments/18uybsm/are_we_missing_an_obvious_way_to_boost_inference/ | ttkciar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uybsm | false | null | t3_18uybsm | /r/LocalLLaMA/comments/18uybsm/are_we_missing_an_obvious_way_to_boost_inference/ | false | false | self | 88 | null |
Free LLaMA 70B chat | 2 | 2023-12-31T02:58:44 | https://chat.groq.com | Round-Holiday1406 | chat.groq.com | 1970-01-01T00:00:00 | 0 | {} | 18uxkqx | false | null | t3_18uxkqx | /r/LocalLLaMA/comments/18uxkqx/free_llama_70b_chat/ | false | false | default | 2 | null | |
Inference-only implementation of Mamba optimized for CPU | 48 | Hey all!
Recently, I've been wanting to play around with [Mamba](https://github.com/state-spaces/mamba), the LLM architecture that relies on state space models instead of transformers. But the reference implementation has a hard requirement on CUDA, so I couldn't run it on my Apple Silicon MacBook.
After poking at other implementations of Mamba, I've managed to get it to the point where, with the 2.8B model at FP32 and the Accelerate framework, I can generate 6.5 tokens/s. And there will be more optimizations in the future. :)
Check it out at https://github.com/flawedmatrix/mamba-ssm | 2023-12-31T02:42:05 | https://www.reddit.com/r/LocalLLaMA/comments/18ux8o1/inferenceonly_implementation_of_mamba_optimized/ | flawedmatrix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ux8o1 | false | null | t3_18ux8o1 | /r/LocalLLaMA/comments/18ux8o1/inferenceonly_implementation_of_mamba_optimized/ | false | false | self | 48 | {'enabled': False, 'images': [{'id': 'ivSkV7Rz3J4-7RMkvTC-veOyqoQFrjB-jPt21z_HHGk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xRLvcCYg4toYOqEOOgyCr-Lf9Z6z3zgjJ2ZByyLyheU.jpg?width=108&crop=smart&auto=webp&s=1277f09289f0a48f8666ff169977a48b4aef3cf8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xRLvcCYg4toYOqEOOgyCr-Lf9Z6z3zgjJ2ZByyLyheU.jpg?width=216&crop=smart&auto=webp&s=242c2ed456441deaa0d6f55730d2f99df07974c1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xRLvcCYg4toYOqEOOgyCr-Lf9Z6z3zgjJ2ZByyLyheU.jpg?width=320&crop=smart&auto=webp&s=1a541427948effd0d163d600c88efa9afce31cbb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xRLvcCYg4toYOqEOOgyCr-Lf9Z6z3zgjJ2ZByyLyheU.jpg?width=640&crop=smart&auto=webp&s=55824917d192b097ecf84de7d529959fd6fb6d24', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xRLvcCYg4toYOqEOOgyCr-Lf9Z6z3zgjJ2ZByyLyheU.jpg?width=960&crop=smart&auto=webp&s=383c40c1e273dc5fe4daea425638c45b23d4d7f5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xRLvcCYg4toYOqEOOgyCr-Lf9Z6z3zgjJ2ZByyLyheU.jpg?width=1080&crop=smart&auto=webp&s=a2cce31ee6cc0a6071e6d4b92f7c2bb64530ded6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xRLvcCYg4toYOqEOOgyCr-Lf9Z6z3zgjJ2ZByyLyheU.jpg?auto=webp&s=2b09da57662ee67fb86b4ba9944a7fb02ccd08d0', 'width': 1200}, 'variants': {}}]} |
State of visual models | 5 | I'm just beginning to play with visual models, and been reading up that they're currently not yet accurate enough to power a webscraper. But I was impressed by the proof of concept I could do with gpt4v, and started researching more. Did anyone have success doing something similar to [https://github.com/unconv/gpt4v-browsing](https://github.com/unconv/gpt4v-browsing) with either LLAVA or Ferret? Would it be possible to finetune these models to perform better at reading text from screenshots rather than generic image analysis or would that require significantly bigger model size? | 2023-12-31T02:26:25 | https://www.reddit.com/r/LocalLLaMA/comments/18uwx2v/state_of_visual_models/ | melheor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uwx2v | false | null | t3_18uwx2v | /r/LocalLLaMA/comments/18uwx2v/state_of_visual_models/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'iOZ_buRgL3ToL7pEtpItOLxfoCKGcuZkIWOQcjkdT4w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JBcBGgtJfBTUCO2KAgEh7kANicN7vNmXR6FXhk-NSaA.jpg?width=108&crop=smart&auto=webp&s=35318584aaccd3867ba5a3f3747f095d2f05e58a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/JBcBGgtJfBTUCO2KAgEh7kANicN7vNmXR6FXhk-NSaA.jpg?width=216&crop=smart&auto=webp&s=6470cfaae207903540af0e6b752ddb5107c0db5f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/JBcBGgtJfBTUCO2KAgEh7kANicN7vNmXR6FXhk-NSaA.jpg?width=320&crop=smart&auto=webp&s=ed46022b7c5822e14f96dc27814b96eee84a8d70', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/JBcBGgtJfBTUCO2KAgEh7kANicN7vNmXR6FXhk-NSaA.jpg?width=640&crop=smart&auto=webp&s=ba91dc3ecd7f9d35c3ad12fa81fadc7e9420bc60', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/JBcBGgtJfBTUCO2KAgEh7kANicN7vNmXR6FXhk-NSaA.jpg?width=960&crop=smart&auto=webp&s=eabd3e3cfe9ad6df8d6e86048ad25a650f5772eb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/JBcBGgtJfBTUCO2KAgEh7kANicN7vNmXR6FXhk-NSaA.jpg?width=1080&crop=smart&auto=webp&s=90bce977c249c100f66f9c20fb6f88f6a9df73b9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/JBcBGgtJfBTUCO2KAgEh7kANicN7vNmXR6FXhk-NSaA.jpg?auto=webp&s=32ef720605178855813d1e9610ff8d639b74419a', 'width': 1200}, 'variants': {}}]} |
What more do AI need for maximum performance? | 1 | Do they need more RAM, GPU, CPU, or all ? For instance, could I run a 70B model with 400 GB of RAM but a 1080Ti ? you understand the question, what balance is needed | 2023-12-31T02:12:07 | https://www.reddit.com/r/LocalLLaMA/comments/18uwmw6/what_more_do_ai_need_for_maximum_performance/ | Terrible_Vegetable4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uwmw6 | false | null | t3_18uwmw6 | /r/LocalLLaMA/comments/18uwmw6/what_more_do_ai_need_for_maximum_performance/ | false | false | self | 1 | null |
No info on Yi 6B? | 1 | Has anyone had any experience with the 6B version of Yi, I can’t find any info on it on this sub. I would expect good performance out of it, but mainly I’m wondering how it compares to mistral. | 2023-12-31T01:31:46 | https://www.reddit.com/r/LocalLLaMA/comments/18uvskm/no_info_on_yi_6b/ | Figai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uvskm | false | null | t3_18uvskm | /r/LocalLLaMA/comments/18uvskm/no_info_on_yi_6b/ | false | false | self | 1 | null |
LoRA/QLoRA training | 2 | I combined together some online tutorials into a [Google Colab](https://colab.research.google.com/drive/1bpTojwp00jnqhVYJdwEpSCkbL69f-Y_7?usp=sharing) that trains a Mistral 7b Instruct v0.1 LoRA, and pushes the LoRA to Huggingface when complete.
I used HuggingFace's AutoModelForCausalLM.from_pretrained() method to get the LLM I fine-tuned with a LoRA.
Question 1: When I need to load the LoRA on top of the model for inference with localai/llama.cpp, how do I know which version of the model to choose? I passed in a 4-bit quantization config, but I don't know if that uses one of the predefined quantized gguf models, or something different. There's also separate s and m versions of quantized gguf models.
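For context, the 4-bit quantization config I mean is the bitsandbytes one that goes into from_pretrained, roughly like this (the values are common QLoRA defaults, not necessarily exactly what the Colab uses):

```python
# The bitsandbytes 4-bit load I'm referring to (values are typical QLoRA
# defaults). This quantizes the fp16 weights on the fly at load time -- it is
# separate from llama.cpp's pre-quantized GGUF files.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.1", quantization_config=bnb
)
```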
Question 2: are there good guides for doing a QLoRA as well, and loading into llama.cpp? | 2023-12-31T01:27:16 | https://www.reddit.com/r/LocalLLaMA/comments/18uvp0j/loraqlora_training/ | TheCoconutTree | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uvp0j | false | null | t3_18uvp0j | /r/LocalLLaMA/comments/18uvp0j/loraqlora_training/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]} |
Disk-based LLM? | 1 | Anyone working with on-disk-only LLMs? (So not in memory, only bounded by time)? I've started playing with some flat file formats, but I figured I'd see if any bigger projects were working on LLMs that aren't really tied to RAM or GPU. Smart and slow would be just fine by me. | 2023-12-31T01:10:02 | https://www.reddit.com/r/LocalLLaMA/comments/18uvbq9/diskbased_llm/ | bigattichouse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uvbq9 | false | null | t3_18uvbq9 | /r/LocalLLaMA/comments/18uvbq9/diskbased_llm/ | false | false | self | 1 | null |
What's the smallest LLM that's useful, specifically for testing new architectures? | 41 | Most the posts I've seen were large entities that could afford to train on 8 A100s or something similar. Is there a way for us to try new architectures and perform independent research without breaking the bank? Also, any quick estimates on what smaller models cost to train(I'm thinking 100m not billions, if that's something that's even viable)? I appreciate any feedback. | 2023-12-31T01:01:04 | https://www.reddit.com/r/LocalLLaMA/comments/18uv4no/whats_the_smallest_llm_thats_useful_specifically/ | Likes_To_Learn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uv4no | false | null | t3_18uv4no | /r/LocalLLaMA/comments/18uv4no/whats_the_smallest_llm_thats_useful_specifically/ | false | false | self | 41 | null |
How can a busy person keep up with the latest papers? | 1 | [removed] | 2023-12-31T00:38:25 | https://www.reddit.com/r/LocalLLaMA/comments/18uumv0/how_can_a_busy_person_keep_up_with_the_latest/ | Chance_Confection_37 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uumv0 | false | null | t3_18uumv0 | /r/LocalLLaMA/comments/18uumv0/how_can_a_busy_person_keep_up_with_the_latest/ | false | false | self | 1 | null |
Share your top AI tools of 2023 | 4 | As we approach the end of 2023, I'm looking to gather insights on the AI tools that have genuinely made a difference in your lives. We've all seen those overwhelming lists claiming to feature '50 essential tools' or '10,000 mind-blowing AI tools', but often they're more hype than help. What I'm more interested in are the tools that you've found truly invaluable, the ones you use regularly, setting aside the more common options like ChatGPT.
I'm trying to curate a list of these impactful AI tools. You can find this evolving compilation on my website, [The Prompt Index](www.thepromptindex.com/prompt-database.php).
Here's my personal AI tool pick of 2023: [MurfAI](https://get.murf.ai/ypzfokjcmf3u), a text-to-speech tool that scared me a little. There are a few text-to-speech tools out there, like [ElevenLabs](https://try.elevenlabs.io/ayd6pcb0buyr), but MurfAI took the top spot for me.
So what are yours? | 2023-12-31T00:32:37 | https://www.reddit.com/r/LocalLLaMA/comments/18uui60/share_you_top_ai_tools_of_2023/ | steves1189 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uui60 | false | null | t3_18uui60 | /r/LocalLLaMA/comments/18uui60/share_you_top_ai_tools_of_2023/ | false | false | self | 4 | null |
What Template Or Library Should I Use To Create an AI Writing Assistant? | 1 | If you were going to create a simple V1 version of a writing assitant, with very basic features (ie, something like Copy AI, Jasper AI, Vello AI, Jenny AI, Bearly AI), what template or library would you start with? | 2023-12-31T00:30:09 | https://www.reddit.com/r/LocalLLaMA/comments/18uug2w/what_template_or_library_should_i_use_to_create/ | AttorneyJackKelly | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uug2w | false | null | t3_18uug2w | /r/LocalLLaMA/comments/18uug2w/what_template_or_library_should_i_use_to_create/ | false | false | self | 1 | null |
Does anyone know of a plugin or something that uses an API to add an LLM to a text editor? | 6 | I have textgen webui and have a basic understanding of how to use its API. I've been able to get it connected to SillyTavern, for example.
Can anyone point me to a straightforward, at least mostly foolproof, guide on adding an LLM to a text editor? I'm not picky about which editor or LLM loader is used. I'm basically just looking for something that I can get working.
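In case it helps frame the question, the kind of glue I have in mind is just a small script hitting text-generation-webui's OpenAI-compatible endpoint (a sketch; it assumes the webui was started with --api and is listening on the default port):

```python
# Sketch of calling text-generation-webui's OpenAI-compatible API
# (assumes the server was started with --api on the default port 5000).
import requests

resp = requests.post(
    "http://127.0.0.1:5000/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Rewrite more formally: gonna fix it later"}],
        "max_tokens": 128,
        "temperature": 0.7,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```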
I've tried [gen.nvim](https://github.com/David-Kunz/gen.nvim) (uses ollama) and [uniteai](https://github.com/freckletonj/uniteai) but couldn't get either to work. I'm on ubuntu btw | 2023-12-31T00:21:01 | https://www.reddit.com/r/LocalLLaMA/comments/18uu8vt/does_anyone_know_of_a_plugin_or_something_that/ | multiverse_fan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uu8vt | false | null | t3_18uu8vt | /r/LocalLLaMA/comments/18uu8vt/does_anyone_know_of_a_plugin_or_something_that/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'ZfUOIHwLvAv9p5fka_IMnV7fG379NuwTSr9ZVtEgqrk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DFk2AEHkuyUT_H3H8VuEAXhTtZhFKsZoHygbbOuIja0.jpg?width=108&crop=smart&auto=webp&s=3015b6183abe0196ac7588aca57441cadf1f8c9d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DFk2AEHkuyUT_H3H8VuEAXhTtZhFKsZoHygbbOuIja0.jpg?width=216&crop=smart&auto=webp&s=7d44d8ac6befaf2cf1abcc2692c3a0a2d0d82e00', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DFk2AEHkuyUT_H3H8VuEAXhTtZhFKsZoHygbbOuIja0.jpg?width=320&crop=smart&auto=webp&s=c238ef17711e701974661f31325ea397dd383069', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DFk2AEHkuyUT_H3H8VuEAXhTtZhFKsZoHygbbOuIja0.jpg?width=640&crop=smart&auto=webp&s=3011654bc021ddb3c8c661d9b950165ab14f02de', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DFk2AEHkuyUT_H3H8VuEAXhTtZhFKsZoHygbbOuIja0.jpg?width=960&crop=smart&auto=webp&s=d39bc045c3ec2c5649f80ac53eb36346690ed453', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DFk2AEHkuyUT_H3H8VuEAXhTtZhFKsZoHygbbOuIja0.jpg?width=1080&crop=smart&auto=webp&s=c37566b82ce4cdd4ac990c23fc22e96d77e87085', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DFk2AEHkuyUT_H3H8VuEAXhTtZhFKsZoHygbbOuIja0.jpg?auto=webp&s=4483ab1b41c2e1a78f84486e539f9bced65faddd', 'width': 1200}, 'variants': {}}]} |
Do you have some speed benchmarks for the largest models, like 120B? | 15 | Is it possible to run 120B on CPU only with 64GB or 128GB RAM?
What speed is achievable? Is it closer to 1 token per second or 1 token per 10 seconds?
What if you use 8GB VRAM? Any significant progress? What about 24GB VRAM?
What about other "large large language models"? :) Is there a speed benchmark somewhere? Usually I see people using very expensive hardware.
​ | 2023-12-30T23:05:04 | https://www.reddit.com/r/LocalLLaMA/comments/18usio8/do_you_have_some_speed_benchmarks_for_largest/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18usio8 | false | null | t3_18usio8 | /r/LocalLLaMA/comments/18usio8/do_you_have_some_speed_benchmarks_for_largest/ | false | false | self | 15 | null |
Out of the loop - what’s Mixtral? | 1 | Hey all, been out of the local AI loop for a while, now seeing stuff about Mixtral being amazing? Had a look on their website, and it seems really good. What’s it like? Are there GGUF (or whatever the CPU acronym is now) versions? How’s it for roleplay/instruct uses?
Any thoughts appreciated! | 2023-12-30T22:52:21 | https://www.reddit.com/r/LocalLLaMA/comments/18us7zp/out_of_the_loop_whats_mixtral/ | NotTheTitanic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18us7zp | false | null | t3_18us7zp | /r/LocalLLaMA/comments/18us7zp/out_of_the_loop_whats_mixtral/ | false | false | self | 1 | null |
Fine-tuning to keep character memory | 1 | The main problem with role play is that the characters forget what you were doing because of the limited context. But what if after each "quest" we fine-tune the model based on the dialogue you had? | 2023-12-30T22:23:50 | https://www.reddit.com/r/LocalLLaMA/comments/18urjrs/finetuning_to_keep_character_memory/ | Melodic-Implement-11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18urjrs | false | null | t3_18urjrs | /r/LocalLLaMA/comments/18urjrs/finetuning_to_keep_character_memory/ | false | false | self | 1 | null |
ChatGPT is a Lazy Piece of Shit, CodeBooga Rules | 51 | I have very little Python knowledge. I tried 50 times to get GPT-4 to generate at least easily manageable code for StyleTTS2 inference with a Gradio UI. Each time, either the code was missing something that even the comments or pseudocode didn't mention, or it was a lazy "high level" product. Troubleshooting was also quite useless.
Next, I prompted CodeBooga with the very same text + script. The code is fully written and the Gradio UI works as well. It has a few issues but those are quite easy to solve.
I know, I know. GPT-4's solution is probably valid with a bit of effort but like I mentioned, I am not even at beginner level. I regret paying 20$ for GPT-4. | 2023-12-30T22:17:27 | https://www.reddit.com/gallery/18ure7v | xadiant | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 18ure7v | false | null | t3_18ure7v | /r/LocalLLaMA/comments/18ure7v/chatgpt_is_a_lazy_piece_of_shit_codebooga_rules/ | false | false | 51 | null | |
Jesus, what a year for AI here’s to 2024 | 45 | Well, we are nearly at the end of one of my all time favourite years of being on this planet. Here’s what’s happened in AI in the last 12 months.
**January:**
* Microsoft's staggering $10 Billion investment in OpenAI makes waves.
**February:**
* Meta unveils Llama 2, challenging OpenAI’s models.
* OpenAI introduces ChatGPT Plus, a premium chatbot service.
* Microsoft's new AI-enhanced Bing Search debuts.
**March:**
* OpenAI reveals GPT-4, setting a new AI benchmark.
* Midjourney's V5 elevates AI-driven image creation.
* Microsoft rolls out Copilot for Microsoft 365.
* Google launches Bard, a ChatGPT competitor.
**April:**
* Elon Musk and Steve Wozniak lead a petition against AI models surpassing GPT-4.
**May:**
* Samsung leads a corporate ban on Gen AI tools over security concerns.
* OpenAI adds plugins and web browsing to ChatGPT.
* Nvidia's stock soars, nearing $1 Trillion market cap.
**June:**
* Adobe introduces Firefly, an advanced image generator.
* Accenture announces a colossal $3 billion AI investment.
**July:**
* ChatGPT adds code interpretation and data analysis.
* Stack Overflow sees traffic halved by Gen AI coding tools.
**August:**
* Salesforce backs OpenAI rival Hugging Face with over $4 Billion.
* ChatGPT Enterprise launches for business use.
**September:**
* OpenAI releases Dall-E 3 and multimodal ChatGPT features.
* Meta brings AI chatbots to its platforms and more.
**October:**
* Google's new Pixel phones feature Gen AI.
* Epik app's AI tech reignites 90s nostalgia.
* Baidu enters the AI race with its ChatGPT alternative.
**November:**
* Elon Musk unveils Grok, a ChatGPT competitor.
* OpenAI presents Custom GPTs and GPT-4 Turbo.
* Ex-Apple team debuts the Humane Ai Pin.
* Nvidia's H200 chips to power future AI.
* OpenAI's Sam Altman in a surprising hire-fire-rehire saga.
**December:**
* Pika Labs' Pika 1.0 heralds a new age in AI video generation.
* Google's Gemini claims to outperform GPT-4.
* Third party tests suggest Gemini is worse than GPT3.5
* Midjourney's V6 update takes AI imagery further.
By all means this is not everything, I’m limited by post length.
If you enjoyed this you’ll love my [weekly newsletter](https://www.thepromptindex.com/newsletter.html) which caught most of this stuff from March
My favourite tool of the year [MurfAi](https://get.murf.ai/ypzfokjcmf3u)
Credit: [@rowancheung](https://x.com/rowancheung/status/1740614057177338114?s=46) | 2023-12-30T21:51:15 | https://www.reddit.com/r/LocalLLaMA/comments/18uqrzl/jesus_what_a_year_for_ai_heres_to_2024/ | steves1189 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uqrzl | false | null | t3_18uqrzl | /r/LocalLLaMA/comments/18uqrzl/jesus_what_a_year_for_ai_heres_to_2024/ | false | false | self | 45 | null |
Jesus, what a year for AI here’s to 2024 | 1 | Well, we are nearly at the end of one of my all time favourite years of being on this planet. Here’s what’s happened in AI in the last 12 months.
**January:**
* Microsoft's staggering $10 Billion investment in OpenAI makes waves.
**February:**
* Meta unveils Llama 2, challenging OpenAI’s models.
* OpenAI introduces ChatGPT Plus, a premium chatbot service.
* Microsoft's new AI-enhanced Bing Search debuts.
**March:**
* OpenAI reveals GPT-4, setting a new AI benchmark.
* Midjourney's V5 elevates AI-driven image creation.
* Microsoft rolls out Copilot for Microsoft 365.
* Google launches Bard, a ChatGPT competitor.
**April:**
* Elon Musk and Steve Wozniak lead a petition against AI models surpassing GPT-4.
**May:**
* Samsung leads a corporate ban on Gen AI tools over security concerns.
* OpenAI adds plugins and web browsing to ChatGPT.
* Nvidia's stock soars, nearing $1 Trillion market cap.
**June:**
* Adobe introduces Firefly, an advanced image generator.
* Accenture announces a colossal $3 billion AI investment.
**July:**
* ChatGPT adds code interpretation and data analysis.
* Stack Overflow sees traffic halved by Gen AI coding tools.
**August:**
* Salesforce backs OpenAI rival Hugging Face with over $4 Billion.
* ChatGPT Enterprise launches for business use.
**September:**
* OpenAI releases Dall-E 3 and multimodal ChatGPT features.
* Meta brings AI chatbots to its platforms and more.
**October:**
* Google's new Pixel phones feature Gen AI.
* Epik app's AI tech reignites 90s nostalgia.
* Baidu enters the AI race with its ChatGPT alternative.
**November:**
* Elon Musk unveils Grok, a ChatGPT competitor.
* OpenAI presents Custom GPTs and GPT-4 Turbo.
* Ex-Apple team debuts the Humane Ai Pin.
* Nvidia's H200 chips to power future AI.
* OpenAI's Sam Altman in a surprising hire-fire-rehire saga.
**December:**
* Pika Labs' Pika 1.0 heralds a new age in AI video generation.
* Google's Gemini claims to outperform GPT-4.
* Third party tests suggest Gemini is worse than GPT3.5
* Midjourney's V6 update takes AI imagery further.
By all means this is not everything, I’m limited by post length.
If you enjoyed this you’ll love my [weekly newsletter](https://www.thepromptindex.com/newsletter.html) which caught most of this stuff from March
My favourite tool of the year [MurfAi](https://get.murf.ai/ypzfokjcmf3u)
Credit: [@rowancheung](https://x.com/rowancheung/status/1740614057177338114?s=46) | 2023-12-30T21:51:10 | https://www.reddit.com/r/LocalLLaMA/comments/18uqrx2/jesus_what_a_year_for_ai_heres_to_2024/ | steves1189 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uqrx2 | false | null | t3_18uqrx2 | /r/LocalLLaMA/comments/18uqrx2/jesus_what_a_year_for_ai_heres_to_2024/ | false | false | self | 1 | null |
I would like to express my sincere gratitude to all those involved in open artificial intelligence. Be always happy. | 1 | [removed] | 2023-12-30T21:46:35 | https://www.reddit.com/r/LocalLLaMA/comments/18uqo34/i_would_like_to_express_my_sincere_gratitude_to/ | Imunoglobulin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uqo34 | false | null | t3_18uqo34 | /r/LocalLLaMA/comments/18uqo34/i_would_like_to_express_my_sincere_gratitude_to/ | false | false | self | 1 | null |
40 LOC token healing impl | 29 | I just released a token healing implementation (under 40 LOC). It trims and regrows prompts to align with the model's tokenizer. This leads to better completion and robustness to trailing whitespace/punctuation.
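For anyone unfamiliar with the idea, here's the gist in a few lines (my own rough illustration of the technique, not the code from the repo):

```python
# The gist of token healing (rough illustration, NOT the repo's code):
# drop the last prompt token, then only allow continuations that re-extend it.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

prompt = "The link is http:"
ids = tok(prompt, add_special_tokens=False).input_ids
healed_ids, tail = ids[:-1], tok.decode(ids[-1:])   # trim the dangling token

# Candidate first tokens must start with the trimmed text (e.g. "://" for ":").
allowed = [t for t in range(len(tok)) if tok.decode([t]).startswith(tail)]
# Generation then runs on healed_ids with all other logits masked out for the
# first step, which avoids awkward token splits at the prompt boundary.
```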
I don't have a lot of experience releasing projects, so any feedback is appreciated 🙏
https://github.com/Ayenem/TokenHealer/tree/main | 2023-12-30T21:02:20 | https://www.reddit.com/r/LocalLLaMA/comments/18upo3x/40_loc_token_healing_impl/ | Evirua | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18upo3x | false | null | t3_18upo3x | /r/LocalLLaMA/comments/18upo3x/40_loc_token_healing_impl/ | false | false | self | 29 | {'enabled': False, 'images': [{'id': 'hdfFg-2_N_52V2CUltfS4W0nU6rYLSOpaW4VUw25d0c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IwEx1tzuBfeXce6DN7i4cW0l3FJzU2EQn6ih3jEnjR4.jpg?width=108&crop=smart&auto=webp&s=9cddc013ce6f0aebd4cc5258d8ae195a856379a2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/IwEx1tzuBfeXce6DN7i4cW0l3FJzU2EQn6ih3jEnjR4.jpg?width=216&crop=smart&auto=webp&s=9971e07ccce5c5e9f38f190b6194c13568b819a8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/IwEx1tzuBfeXce6DN7i4cW0l3FJzU2EQn6ih3jEnjR4.jpg?width=320&crop=smart&auto=webp&s=5368c989cd94ee3bca471d50f5d3263b41338a90', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/IwEx1tzuBfeXce6DN7i4cW0l3FJzU2EQn6ih3jEnjR4.jpg?width=640&crop=smart&auto=webp&s=94aeb840c1b1a1f66f49651f68c89580406e0c19', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/IwEx1tzuBfeXce6DN7i4cW0l3FJzU2EQn6ih3jEnjR4.jpg?width=960&crop=smart&auto=webp&s=f633f7e82ededc32a92a9497dd87ead80b438e13', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/IwEx1tzuBfeXce6DN7i4cW0l3FJzU2EQn6ih3jEnjR4.jpg?width=1080&crop=smart&auto=webp&s=90e69f6a254b78ba3e67ba5928a46bf961aa0e95', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/IwEx1tzuBfeXce6DN7i4cW0l3FJzU2EQn6ih3jEnjR4.jpg?auto=webp&s=9dd684ba6ee66f5cfb2bfd6d3f3fe7187010e214', 'width': 1200}, 'variants': {}}]} |
Simple local setup for huge codebase maintenance? | 10 | I have a large git codebase to maintain and been thinking about running a model to help, mainly for inquiries such as 'which files need to be changed in order to achieve X'.
What would be the simplest local setup for that? | 2023-12-30T20:09:56 | https://www.reddit.com/r/LocalLLaMA/comments/18uogjn/simple_local_setup_for_huge_codebase_maintenance/ | bolaviva | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uogjn | false | null | t3_18uogjn | /r/LocalLLaMA/comments/18uogjn/simple_local_setup_for_huge_codebase_maintenance/ | false | false | self | 10 | null |
Help running Goliath 120b with llama.cpp? | 6 | I've been trying to run Goliath 120b via llama.cpp. I downloaded the Q3\_K\_M.gguf quantization, and have attempted running it with several various llama.cpp releases, with/without the system prompt, etc., generally using a command that goes something like this:
main.exe -i --threads 12 --interactive-first --temp 0.7 --top-p 0.1 --top-k 40 --repeat-penalty 1.176 --instruct -c 4096 -m goliath-120b.Q3\_K\_M.gguf --in\_suffix "ASSISTANT: " --in\_prefix "USER: " -f system.txt
The funny thing is, it loads \*just fine\* -- no errors, no out-of-memory, no heavy paging (I have 64 GB of RAM so it juuuust fits when everything else is closed)... It even generates a respectable 0.75 t/s.
BUT.
The interactions I've been able to have with it are pretty boring. Observe:
You are a helpful AI assistant.
USER: {prompt}
ASSISTANT:
\> USER: Hi there!
ASSISTANT: ######################################################################################################################
\> USER:
No matter what tricks or variations I try, it refuses to generate anything other than a string of "#####" until I ctrl+c and stop it.
I've searched on this sub, the llama.cpp github, even the model documentation, and am at a complete loss for what could be causing this behavior.
Help?
Thanks in advance :)
​ | 2023-12-30T19:58:12 | https://www.reddit.com/r/LocalLLaMA/comments/18uo6qo/help_running_goliath_120b_with_llamacpp/ | AI-Pon3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uo6qo | false | null | t3_18uo6qo | /r/LocalLLaMA/comments/18uo6qo/help_running_goliath_120b_with_llamacpp/ | false | false | self | 6 | null |
Who do I have to pay to get easy+fast+private big model access? I want a service like "pick HF model, start chatting, get fast replies." | 1 | [removed] | 2023-12-30T19:51:26 | https://www.reddit.com/r/LocalLLaMA/comments/18uo1c7/who_do_i_have_to_pay_to_get_easyfastprivate_big/ | drawntomore | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uo1c7 | false | null | t3_18uo1c7 | /r/LocalLLaMA/comments/18uo1c7/who_do_i_have_to_pay_to_get_easyfastprivate_big/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'VKoIjTQaRCbBL505btaAbt1k22K_XE7vNMn_jVgQxEw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vj5MHza6p3n7PkFyOS-LOUh7mxd-8itU8AAQePcq8z8.jpg?width=108&crop=smart&auto=webp&s=9c11bcb7840004e107fd0a14cb1b679bd49116ee', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vj5MHza6p3n7PkFyOS-LOUh7mxd-8itU8AAQePcq8z8.jpg?width=216&crop=smart&auto=webp&s=d5cbab4238287240bec49dfba4273f63c43b9aee', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vj5MHza6p3n7PkFyOS-LOUh7mxd-8itU8AAQePcq8z8.jpg?width=320&crop=smart&auto=webp&s=d970666c535f76aaed62ec209ba45723d0af188c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vj5MHza6p3n7PkFyOS-LOUh7mxd-8itU8AAQePcq8z8.jpg?width=640&crop=smart&auto=webp&s=1638f44d82756bab1ecd82cc6d8c8b3814aae15c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vj5MHza6p3n7PkFyOS-LOUh7mxd-8itU8AAQePcq8z8.jpg?width=960&crop=smart&auto=webp&s=ca2efb5e63de2b8b3c7869e4d47b52a6402be442', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vj5MHza6p3n7PkFyOS-LOUh7mxd-8itU8AAQePcq8z8.jpg?width=1080&crop=smart&auto=webp&s=dfe1a536fb04a7979f55fda5e35f2107496bf65d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vj5MHza6p3n7PkFyOS-LOUh7mxd-8itU8AAQePcq8z8.jpg?auto=webp&s=bd6b9de9826268c6b701151273f591f39b11585f', 'width': 1200}, 'variants': {}}]} |
Expedia chatbot | 424 | Looks like the Expedia chatbot can be "prompted" into dropping the persona and doing other things! | 2023-12-30T19:49:41 | https://www.reddit.com/gallery/18unztg | Educational-Let-5580 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 18unztg | false | null | t3_18unztg | /r/LocalLLaMA/comments/18unztg/expedia_chatbot/ | false | false | 424 | null | |
Models for MacBook Air 8GB | 2 | Hi, I have a Macbook Air 8GB and I was wondering if you know decent models to run on it?
Currently I use mainly orca-mini.
Thanks! | 2023-12-30T19:42:40 | https://www.reddit.com/r/LocalLLaMA/comments/18untwh/models_for_macbook_air_8gb/ | SoloBSD | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18untwh | false | null | t3_18untwh | /r/LocalLLaMA/comments/18untwh/models_for_macbook_air_8gb/ | false | false | self | 2 | null |
[Asking as a newbie] Mixtral is a MoE based implementation with 8 experts. Theoretically, if I want to add another expert to this configuration, is it possible to do without having to again pretrain the model from scratch? | 1 | [removed] | 2023-12-30T19:02:29 | https://www.reddit.com/r/LocalLLaMA/comments/18umw2q/asking_as_a_newbie_mixtral_is_a_moe_based/ | ankitm1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18umw2q | false | null | t3_18umw2q | /r/LocalLLaMA/comments/18umw2q/asking_as_a_newbie_mixtral_is_a_moe_based/ | false | false | self | 1 | null |
Local LLMs on Visual Studio 2022 | 3 | Hi everyone!
I'm looking at ways to query local LLMs from Visual Studio 2022 in the same way that Continue enables it from Visual Studio Code. I haven't seen anything except ChatGPT extensions in the VS 2022 marketplace.
Did I not search thoroughly enough?
Could one of those VS 2022 ChatGPT extensions be edited to work with local LLMs if they use the Chat GPT API?
Am I a fool for sticking to VS 2022 when everyone is using Code?
Eager to hear your thoughts on this.
​ | 2023-12-30T18:57:49 | https://www.reddit.com/r/LocalLLaMA/comments/18umryb/local_llms_on_visual_studio_2022/ | lesh666 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18umryb | false | null | t3_18umryb | /r/LocalLLaMA/comments/18umryb/local_llms_on_visual_studio_2022/ | false | false | self | 3 | null |
New dataset for fine-tuning: spicyfiction | 36 | I just uploaded a small dataset for fine-tuning long-form smut fiction-writing models. It contains 275 examples mapping an AI-written summary of a smut story to the full story. Token counts for the full stories range from 10k-15k, for about 2.7M tokens in total. Here's the link: [https://huggingface.co/datasets/ai-danger/spicyfiction](https://huggingface.co/datasets/ai-danger/spicyfiction)
If you fine-tune a model on this dataset, please make it publicly available! Let me know if you'd like to collaborate on a Yi 34B 200k finetune based on an expanded version of this dataset. | 2023-12-30T18:35:10 | https://www.reddit.com/r/LocalLLaMA/comments/18um9ea/new_dataset_for_finetuning_spicyfiction/ | threevox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18um9ea | false | null | t3_18um9ea | /r/LocalLLaMA/comments/18um9ea/new_dataset_for_finetuning_spicyfiction/ | false | false | self | 36 | {'enabled': False, 'images': [{'id': 'rMkrqJcGJKCBB2LZ1eFiSS-s0-zRvCeorLJAr4hHy50', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ip690e0AJsedC7juiCHznTGE-ercGvRpFwtixDkiTTk.jpg?width=108&crop=smart&auto=webp&s=f998b22475f0f43725a999933c71b3dada58f201', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ip690e0AJsedC7juiCHznTGE-ercGvRpFwtixDkiTTk.jpg?width=216&crop=smart&auto=webp&s=d1cfe9d05950f1c442504504525178c703dde0fa', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ip690e0AJsedC7juiCHznTGE-ercGvRpFwtixDkiTTk.jpg?width=320&crop=smart&auto=webp&s=3454afa6fb1da8946325413c831069a1eee7b967', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ip690e0AJsedC7juiCHznTGE-ercGvRpFwtixDkiTTk.jpg?width=640&crop=smart&auto=webp&s=ea1ec9da308f2dafef37071f48394309bb57138e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ip690e0AJsedC7juiCHznTGE-ercGvRpFwtixDkiTTk.jpg?width=960&crop=smart&auto=webp&s=0e52babb22454f4cbf1f8382cd80036dd22bb482', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ip690e0AJsedC7juiCHznTGE-ercGvRpFwtixDkiTTk.jpg?width=1080&crop=smart&auto=webp&s=0cebe59cde31473784300137f15b316c270feb71', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ip690e0AJsedC7juiCHznTGE-ercGvRpFwtixDkiTTk.jpg?auto=webp&s=3110cbaeaefb865849de4fd7be33a1bf98b6006b', 'width': 1200}, 'variants': {}}]} |
If modern M.2 ssd drives can read data at a blistering 7GB a second, why can they not be used to store a model? | 91 | I do understand the difference between storage and ram, I’m not a total crayon eater, but with hard drives that fast I’m starting to wonder about the practical speed difference between RAM and a hard drive. Can someone help break it down for me and explain further? | 2023-12-30T17:58:42 | https://www.reddit.com/r/LocalLLaMA/comments/18ulesg/if_modern_m2_ssd_drives_can_read_data_at_a/ | katiecharm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ulesg | false | null | t3_18ulesg | /r/LocalLLaMA/comments/18ulesg/if_modern_m2_ssd_drives_can_read_data_at_a/ | false | false | self | 91 | null |
Why are you guys running local LLMs? | 1 | So basically… why? What do you use them for? I get that you can get around censorship, but that seems like the only advantage? That, and maybe privacy?
Basically, why are you guys going to all this trouble when something like OpenAI exists anyway and is better than any model you can run locally?
When will MIxtral Medium be available to run on your local computer? | 25 | I saw that you can test this model on [poe.com](https://poe.com), and also, it seems, on the haggingface in the space section. I can't find this to download so I can test it on my computer.
​
I've previously tested 8x7 on my local machine and am impressed, but the model still lacks intelligence due to the number of parameters. I still prefer the 120B models.
​
Is there any information about what parameters MIxtral Medium will have? I've heard of 12x7, 8x12 and 8x14 | 2023-12-30T17:53:13 | https://www.reddit.com/r/LocalLLaMA/comments/18ulaik/when_will_mixtral_medium_be_available_to_run_on/ | Secret_Joke_2262 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ulaik | false | null | t3_18ulaik | /r/LocalLLaMA/comments/18ulaik/when_will_mixtral_medium_be_available_to_run_on/ | false | false | self | 25 | null |
Image descriptions with LLAVA | 9 | Anyone have any tips? I am getting lots of hallucinations about subjects in the background. It always likes to guess what the subjects may be feeling or thinking and talk about what the image "evokes"
Here is a "good" label:
The image features a comic strip with two main characters: an angel and a runner. The angel is standing on clouds, while the runner is running below them. They are both engaged in conversation, with the angel saying "Be smart. Run your own workout." The runner appears to be listening attentively to the advice given by the angel. The comic strip has a black and white color scheme, giving it an artistic and classic appearance.
https://preview.redd.it/137eb1txzg9c1.png?width=1600&format=png&auto=webp&s=680a7f9359838d79d88ce840614396dc868699b7
and a bad label
https://preview.redd.it/tr7wca020h9c1.png?width=1800&format=png&auto=webp&s=4f9854dbb30032567c441e1ca023d7bafe945d20
​
The image features a man with an orange shirt and black pants, standing in the center of the frame. He is wearing a pair of shoes that have odd-shaped toes, which make them stand out from typical footwear. The man appears to be posing for a picture or engaging in some form of activity. The scene also includes a few other people in the background, but they are not the main focus of the image. There is no additional context provided about the setting or purpose of the photo. | 2023-12-30T17:51:55 | https://www.reddit.com/r/LocalLLaMA/comments/18ul9ij/image_descriptions_with_llava/ | Early_Technician_540 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ul9ij | false | null | t3_18ul9ij | /r/LocalLLaMA/comments/18ul9ij/image_descriptions_with_llava/ | false | false | 9 | null | |
Cheap compute cards | 1 | Hey, I want to begin developing my own AI models. I’m sure they won’t be any good but I want to learn. I however will need a fairly decent compute card with a lot of vram according to my research. I can’t do anything with a subscription service, so I will need to just buy something outright. I currently have a 6800 xt which could cut it for some stuff but for what I’m looking at I will need something much bigger. I’m not exclusively looking at LLMs either, but they are the rage right now so I probably will do some stuff with them. What do y’all recommend I get? I’m looking for price to performance and not anything absurdly expensive.
Also please lmk if this is not the right sub, not too familiar with Reddit. | 2023-12-30T17:22:18 | https://www.reddit.com/r/LocalLLaMA/comments/18uklc3/cheap_compute_cards/ | Jumper775-2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uklc3 | false | null | t3_18uklc3 | /r/LocalLLaMA/comments/18uklc3/cheap_compute_cards/ | false | false | self | 1 | null |
New 65B Model trained on 3.2 Trillion Tokens | 1 | [https://huggingface.co/xverse/XVERSE-65B-2](https://huggingface.co/xverse/XVERSE-65B-2)
It's the XVERSE-65B model with further pretraining, and is licensed under Apache 2.0. It gets 20% higher GSM8K and 41% higher HumanEval scores compared to the original version, and seems to be better than Llama 70B. | 2023-12-30T17:02:57 | https://www.reddit.com/r/LocalLLaMA/comments/18uk5hg/new_65b_model_trained_on_32_trillion_tokens/ | QuieselWusul | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uk5hg | false | null | t3_18uk5hg | /r/LocalLLaMA/comments/18uk5hg/new_65b_model_trained_on_32_trillion_tokens/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'p8-Po8haRjKx4DwAyqpVQjnawr0m8csMI5bNIpxocCU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_vedw7nwJ5e20HVYcFASvYkGN91FAg1LxZj3YkkhKtc.jpg?width=108&crop=smart&auto=webp&s=eb709e94c9eef9148c3e9344f15102c36a826422', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_vedw7nwJ5e20HVYcFASvYkGN91FAg1LxZj3YkkhKtc.jpg?width=216&crop=smart&auto=webp&s=b21f70a14bbdb3e10c68403887ebc7cee8ec4e05', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/_vedw7nwJ5e20HVYcFASvYkGN91FAg1LxZj3YkkhKtc.jpg?width=320&crop=smart&auto=webp&s=f8a63d2bc48d1194083e270c1a008238d96e49aa', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/_vedw7nwJ5e20HVYcFASvYkGN91FAg1LxZj3YkkhKtc.jpg?width=640&crop=smart&auto=webp&s=bc26ced38c6bae593edbb868853d96ddead0dcbc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/_vedw7nwJ5e20HVYcFASvYkGN91FAg1LxZj3YkkhKtc.jpg?width=960&crop=smart&auto=webp&s=d87382a11b4a198e6d3d97986cd4dce6ad32cc2a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/_vedw7nwJ5e20HVYcFASvYkGN91FAg1LxZj3YkkhKtc.jpg?width=1080&crop=smart&auto=webp&s=f97ba2de3db11ce8cd256ca013433b960fc527e3', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/_vedw7nwJ5e20HVYcFASvYkGN91FAg1LxZj3YkkhKtc.jpg?auto=webp&s=dafb9f0919081747884e0ce6549fca0a7a2559fc', 'width': 1200}, 'variants': {}}]} |
Using GPU's on a Mac M2 Max via MLX: Update on Training Data Generation+Instruct+Fine-Tune Locally | 24 | This is an update to an earlier effort to do an end-to-end fine-tune locally on a Mac silicon (M2 Max) laptop, using llama.cpp (CPU). This iteration uses the [MLX framework](https://github.com/ml-explore/mlx) for machine learning on Mac silicon.
**Top Project Goal**: Finetune a small form factor model (e.g. Mistral-7b) to be a classics AI assistant.
**1st Step**: [Run Mixtral 8x7b locally to generate a high quality training set for fine-tuning](https://www.reddit.com/r/LocalLLaMA/comments/18p731p/project_using_mixtral_8x7b_instruct_v01_q8_to/).
**Current Step**: Finetune Mistral 7b (base) locally
**Approach**: Use [MLX lora](https://github.com/ml-explore/mlx-examples/tree/main/lora) to train a LoRA adapter using gpu's
**Model**: [Mistral 7b](https://mistral.ai/news/announcing-mistral-7b/) (base model)
**Set-up**: Apple M2 Max 64GB shared RAM
* Apple Metal (GPU), 8 threads
* Training data:
* Training : 1025 high quality QA pairs
* Validation: 320 HQ QA pairs
* Test: 320 HQ QA pairs
* 1152 iterations (3 epochs), batch size 6, trained over 3 hours 24 minutes
**Format**: Training data was generated in the previous step from *Life of Greece* and *Caesar & Christ* (Vol.'s 1 & 2 of Durant's Story of Civilization) split into 1,324 500-word chunks. train.jsonl format:
{"text": "Q: What were the five measures taken by Caius Gracchus to garner support from the peasantry, army, proletariat, and businessmen, and how did these actions contribute to his political power in Rome?\nA: Caius Gracchus implemented five measures to garner support from different groups in Rome:\n\n1. Peasantry: To gain the support of the peasantry, Caius renewed the agrarian legislation of his brother, extending its application to state-owned land in the provinces. He restored the land board and personally attended to its operations. This helped him secure the backing of the peasantry by addressing their concerns about land ownership and farming rights."}
**Training command**:
python lora.py --model /Users/me/mlx-examples/lora/mistral-7b-v01-mlx \
--train \
--iters 1152 \
--batch-size 6
**Inference command:**
Fine-tune in lora folder (base + lora adapters from the training run):
python lora.py --model /Users/me/mlx-examples/lora/mistral-7b-v01-mlx \
--adapter-file /Users/me/mlx-examples/lora/adapters_1152_iter.npz \
--num-tokens 250 \
--temp .7 \
--prompt "
Q: How did the Maccabean force, despite being poorly armed and clad, achieve complete victory over a larger army of trained Syrian Greek mercenaries?
A: "
Base model in mistral folder :
python mistral.py --max-tokens 250 \
--temp .7 \
--prompt "
Q: How did the Maccabean force, despite being poorly armed and clad, achieve complete victory over a larger army of trained Syrian Greek mercenaries?
A: "
**Results**:
The base Mistral is kind of wonky, which I expected (MLX requires a .tar archive for conversion and I can't find a \*.tar version of TheBloke's instruct-trained Mistral). The FT version gives way better answers:
[Base Mistral vs FT Mistral ](https://preview.redd.it/ay5rd9ftbg9c1.png?width=1908&format=png&auto=webp&s=03de848daa244e035e40f7dca755e109cc9f9ff5)
**Test**:
Test loss 1.543, Test ppl 4.677
Not sure what to make of this--looks pretty crummy
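(For what it's worth, ppl here should just be exp(loss), and the two reported numbers do agree:)

    import math
    print(math.exp(1.543))   # ~4.68, matching the reported test ppl of 4.677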
**Training & Validation Loss**:
[Training & Validation Loss over 1152 Iterations\/3 Epochs](https://preview.redd.it/ywvu80rang9c1.png?width=1718&format=png&auto=webp&s=5d74b1ae5f41a8afedb46740057caf20667e9b26)
Not really sure what to make of this. It doesn't look like much of an improvement but the output at generation seems to be very different.
Observations:
* MLX is fast. This took 3:24 hours as opposed to almost 12 hours using llama.cpp
* Maybe Less is More ([LIMA paper](https://arxiv.org/abs/2305.11206)). 1k observations isn't a lot of training data.
* Two birds with one stone: this appears to be both an instruct-train (it seems to be better at understanding question answering) as well as domain-tuning (it gets Greek and Roman context better).
* More experiments with different datasets & parameters may be informative
* There needs to be a way to convert model+adapters to gguf so they can be easily shared, run on LM Studio, etc.
* We need GUIs! | 2023-12-30T16:48:20 | https://www.reddit.com/r/LocalLLaMA/comments/18ujt0n/using_gpus_on_a_mac_m2_max_via_mlx_update_on/ | Mbando | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ujt0n | false | null | t3_18ujt0n | /r/LocalLLaMA/comments/18ujt0n/using_gpus_on_a_mac_m2_max_via_mlx_update_on/ | false | false | 24 | null | |
I really slept on Faraday.dev... | 1 | [removed] | 2023-12-30T16:43:58 | https://www.reddit.com/r/LocalLLaMA/comments/18ujpi4/i_really_slept_on_faradaydev/ | False_Grit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ujpi4 | false | null | t3_18ujpi4 | /r/LocalLLaMA/comments/18ujpi4/i_really_slept_on_faradaydev/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'QfSUJ2HDkyeqm6m2TptkVq9kwR8qIAaRq1XoC7h3Yxs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/X5ela1vbRsPzeCmNCDniqwLh1D-nrAkZNICYDHX_BIc.jpg?width=108&crop=smart&auto=webp&s=d2c804c270d7ae5fa1ba78ea435afe319c386f94', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/X5ela1vbRsPzeCmNCDniqwLh1D-nrAkZNICYDHX_BIc.jpg?width=216&crop=smart&auto=webp&s=ccb2fcb8ddd3ebaed302250793b0d037856d6c06', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/X5ela1vbRsPzeCmNCDniqwLh1D-nrAkZNICYDHX_BIc.jpg?width=320&crop=smart&auto=webp&s=caeb5be2eee1390c82b68f5569dd21f9a0c5df71', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/X5ela1vbRsPzeCmNCDniqwLh1D-nrAkZNICYDHX_BIc.jpg?width=640&crop=smart&auto=webp&s=eea4635bfd88858a12e388bc9dbcf2035cf5a69d', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/X5ela1vbRsPzeCmNCDniqwLh1D-nrAkZNICYDHX_BIc.jpg?width=960&crop=smart&auto=webp&s=36c6998e1bae0ec293eaaa90a9601f291b157983', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/X5ela1vbRsPzeCmNCDniqwLh1D-nrAkZNICYDHX_BIc.jpg?width=1080&crop=smart&auto=webp&s=866aaa3a1dbc45b628b35a28685f7e56a85da7fd', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/X5ela1vbRsPzeCmNCDniqwLh1D-nrAkZNICYDHX_BIc.jpg?auto=webp&s=f3cb97e3a1a3477c2fc1eb3653420f2ce00f6cee', 'width': 1200}, 'variants': {}}]} |
Text Diffuser 2, DiffMorpher & SDXL Auto FaceSwap on HuggingFace! | 1 | [removed] | 2023-12-30T16:31:29 | https://www.reddit.com/r/LocalLLaMA/comments/18ujfiz/text_diffuser_2_diffmorpher_sdxl_auto_faceswap_on/ | dev-spot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ujfiz | false | null | t3_18ujfiz | /r/LocalLLaMA/comments/18ujfiz/text_diffuser_2_diffmorpher_sdxl_auto_faceswap_on/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'b3TsCYi8gHIL7X0LWBU9FZWLPRBZkHNlgOTorHHRQ_c', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/LeKcHBME84Snf_KnPnFYSlqO2qiWEJtpBahR5PsBQjU.jpg?width=108&crop=smart&auto=webp&s=7428f28544539399e22d6cd581d015e33f1a1305', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/LeKcHBME84Snf_KnPnFYSlqO2qiWEJtpBahR5PsBQjU.jpg?width=216&crop=smart&auto=webp&s=9999c166b89bacd40c1a9bf8d881246b14a5d5c5', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/LeKcHBME84Snf_KnPnFYSlqO2qiWEJtpBahR5PsBQjU.jpg?width=320&crop=smart&auto=webp&s=33bea3a07c2bef9ab50fb373eaf6502e6e178886', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/LeKcHBME84Snf_KnPnFYSlqO2qiWEJtpBahR5PsBQjU.jpg?auto=webp&s=032d98e714b8396263e8c9f66fed43be208590e4', 'width': 480}, 'variants': {}}]} |
A Free AI Scribe Project I am Working on! Please Provide Feedback! | 9 | Thought I would share a project that I have been working on. AI medical scribe products have popped up everywhere but have been very expensive to deploy. I wrote a program that can connect with a local server running a version of ChatGPT and Speech-To-Text that can take a conversation via microphone and create a SOAP note. You can turn off the AI scribe and use it in a normal chat-based manner. The LLM variables are locked in on the executable option since I wrote this for an end-user physician.
Looking for any feedback (I am very much an amateur) from the community! What proved to be a bit tricky was developing a client that could use the device's microphone.
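The gist of the mic capture ended up being something like this (simplified sketch; the endpoint URL and field names are placeholders for however the local speech-to-text server is configured):

    import sounddevice as sd
    from scipy.io.wavfile import write
    import requests

    SAMPLE_RATE = 16000
    SECONDS = 10   # record in short chunks

    audio = sd.rec(int(SECONDS * SAMPLE_RATE), samplerate=SAMPLE_RATE,
                   channels=1, dtype="int16")
    sd.wait()                                # block until the chunk is recorded
    write("chunk.wav", SAMPLE_RATE, audio)

    # placeholder endpoint for the local speech-to-text server
    with open("chunk.wav", "rb") as f:
        r = requests.post("http://localhost:8000/transcribe", files={"audio": f})
    print(r.json())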
[https://github.com/1984Doc/AI-Scribe](https://github.com/1984Doc/AI-Scribe) | 2023-12-30T16:23:08 | https://www.reddit.com/r/LocalLLaMA/comments/18uj8ys/a_free_ai_scribe_project_i_am_working_on_please/ | ThrowAway12461246124 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uj8ys | false | null | t3_18uj8ys | /r/LocalLLaMA/comments/18uj8ys/a_free_ai_scribe_project_i_am_working_on_please/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'g-gm_7sp2xVrueg5wNNL15EVMgi9drQs__ZKdb7124s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LetvzduKKdGCpaLAJkpWDCeSrWxfBZvRA_DBjhfCKSE.jpg?width=108&crop=smart&auto=webp&s=3e4c824678ea75d940979cb99e9778de619879fd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LetvzduKKdGCpaLAJkpWDCeSrWxfBZvRA_DBjhfCKSE.jpg?width=216&crop=smart&auto=webp&s=d6311a8a13afd0bc970934ca28c6bcc362460dbd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LetvzduKKdGCpaLAJkpWDCeSrWxfBZvRA_DBjhfCKSE.jpg?width=320&crop=smart&auto=webp&s=e4895a01657cae0311c218358562b2f78c1667b1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LetvzduKKdGCpaLAJkpWDCeSrWxfBZvRA_DBjhfCKSE.jpg?width=640&crop=smart&auto=webp&s=1b0c365f709dc193ace21f66cd900f68aef2b003', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LetvzduKKdGCpaLAJkpWDCeSrWxfBZvRA_DBjhfCKSE.jpg?width=960&crop=smart&auto=webp&s=3a44a99839db895b3ae8ab53c35e4330527d7f70', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LetvzduKKdGCpaLAJkpWDCeSrWxfBZvRA_DBjhfCKSE.jpg?width=1080&crop=smart&auto=webp&s=8a19611e8ff14c39ebfd34aec43eca822319f51f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LetvzduKKdGCpaLAJkpWDCeSrWxfBZvRA_DBjhfCKSE.jpg?auto=webp&s=f433dfcecbb69429497227ae87374c5c9dd958e6', 'width': 1200}, 'variants': {}}]} |
Very simple dark theme chat bot frontend, planned and developed by neuralhermes-2.5-mistral-7b as gguf completely autonomous | 4 | 2023-12-30T16:22:53 | FlowerPotTeaTime | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18uj8rg | false | null | t3_18uj8rg | /r/LocalLLaMA/comments/18uj8rg/very_simple_dark_theme_chat_bot_frontend_planned/ | false | false | 4 | {'enabled': True, 'images': [{'id': 'TtZt0_dOqxPScPrqiOPA9SkA5byhhdo0tQrrP1IZTCM', 'resolutions': [{'height': 111, 'url': 'https://preview.redd.it/ywc445kqjg9c1.png?width=108&crop=smart&auto=webp&s=f2ce5417325df91d49ce437f21c8446802fa00dd', 'width': 108}, {'height': 222, 'url': 'https://preview.redd.it/ywc445kqjg9c1.png?width=216&crop=smart&auto=webp&s=38d1552b64cc05729a0c273dd9ced594096e0c90', 'width': 216}, {'height': 329, 'url': 'https://preview.redd.it/ywc445kqjg9c1.png?width=320&crop=smart&auto=webp&s=9ada3cf35fb8b90e5dd288363324f85bd0a16e04', 'width': 320}, {'height': 658, 'url': 'https://preview.redd.it/ywc445kqjg9c1.png?width=640&crop=smart&auto=webp&s=0c33bbac037a378c7c86fce13983c2d2a458d030', 'width': 640}, {'height': 988, 'url': 'https://preview.redd.it/ywc445kqjg9c1.png?width=960&crop=smart&auto=webp&s=3d4fc9c112b9ade49162ec2949f5b92646c2bb1d', 'width': 960}, {'height': 1111, 'url': 'https://preview.redd.it/ywc445kqjg9c1.png?width=1080&crop=smart&auto=webp&s=e341697b1b82eb6b0723e03c86abe25e91bbc011', 'width': 1080}], 'source': {'height': 1675, 'url': 'https://preview.redd.it/ywc445kqjg9c1.png?auto=webp&s=01ffba37a0b8ebf9ec655b755642fe6a8704063d', 'width': 1627}, 'variants': {}}]} | |||
What LocalLLaMA can I run with this setup? | 1 | I am trying to get into the topic of LocalLLaMA, and I am just configuring my new system (not exclusively for this reason). This is the system I have in mind. What can I run on it? What changes would you make when setting up a new system?
CPU:
AMD Ryzen 9 7900X, 12x 4.7GHz, 64MB L3-Cache
MAINBOARD:
Gigabyte B650 Gaming X AX | AMD B650
GRAPHICS CARD:
AMD Radeon RX 7800 XT | Sapphire Pulse
RAM:
64GB DDR5-5600 Corsair Vengeance | 2x 32GB
SSD (M.2 / PCIE):
1TB Samsung 990 PRO PCIe 4.0
Thanks in advance, your advice is highly appreciated! | 2023-12-30T16:20:22 | https://www.reddit.com/r/LocalLLaMA/comments/18uj6n4/what_localllama_can_i_run_with_this_setup/ | atomacht | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uj6n4 | false | null | t3_18uj6n4 | /r/LocalLLaMA/comments/18uj6n4/what_localllama_can_i_run_with_this_setup/ | false | false | self | 1 | null |
Thoughts of Infermatic.ai? | 1 | [removed] | 2023-12-30T16:17:28 | https://www.reddit.com/r/LocalLLaMA/comments/18uj4cb/thoughts_of_infermaticai/ | Horror_Echo6243 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uj4cb | false | null | t3_18uj4cb | /r/LocalLLaMA/comments/18uj4cb/thoughts_of_infermaticai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'W-pCh47ZuoTaTpwtRXWs755SzkkFQzHW6A1NS1MJI_A', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=108&crop=smart&auto=webp&s=aa7b8a73d9f4825dcec8d2a7d8805a9c50369d0b', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=216&crop=smart&auto=webp&s=96a387e3b3e91f000fc25d53f1c6557cfd455bcb', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=320&crop=smart&auto=webp&s=c5ce1a81747b40b0e8234eb8a3c80296d7d99fb3', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=640&crop=smart&auto=webp&s=1bd8d44f5f6385004a3c4eee1032f01d3df456f9', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=960&crop=smart&auto=webp&s=7f1a5b2670f910b6635784a1f867157aaf4f9f70', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=1080&crop=smart&auto=webp&s=59bf35f672c4b4e9f72a5f2da763f919b49c40fe', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?auto=webp&s=6edb04ad94226e898c90b505438281c5c6ba7cf7', 'width': 1080}, 'variants': {}}]} |
best uncensored LLM that can run locally | 1 | **I'm new to LLMs and looking for recommendations for the best uncensored LLM that can run locally. I'm primarily interested in using it for coding and cybersecurity tasks. Any suggestions from those with experience in this area would be greatly appreciated!** | 2023-12-30T16:11:03 | https://www.reddit.com/r/LocalLLaMA/comments/18uiz8s/best_uncensored_llm_that_can_run_locally/ | Street-Coach-6107 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uiz8s | false | null | t3_18uiz8s | /r/LocalLLaMA/comments/18uiz8s/best_uncensored_llm_that_can_run_locally/ | false | false | self | 1 | null |
Some help with feelings? | 5 | Hey, I'm kind of stuck here. Maybe you could help me out.
Still time for the one promised good deed this year ;)
I'm trying to make a voiced chatbot with a frontend that shows a smile, sorrow, etc. based on the sentence it speaks.
I tried different models, but none seem to obey my instruction for a complete chat session.
The first idea was to return JSON in the following structure:
{
"feeling":"Joy",
"answer":"Hey!",
"code":"In case there is code to display. The bot shouldn't read it"
}
But I got rid of that. Even though I told the LLM not to explain the JSON and not to continue the conversation on its own, it sometimes did anyway.
Then I just moved on to something more simple. I told the llm to start every sentence with the feeling like:
***Joy*** Hey there! ***Neutral*** How can I help you?
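On the frontend side I then split the reply into (feeling, sentence) pairs, roughly like this (simplified sketch, assuming the model emits literal ***Feeling*** markers as above):

    import re

    TAG = re.compile(r"\*\*\*(\w+)\*\*\*\s*([^*]*)")

    def parse_reply(reply):
        return [(feeling, text.strip()) for feeling, text in TAG.findall(reply)]

    print(parse_reply("***Joy*** Hey there! ***Neutral*** How can I help you?"))
    # [('Joy', 'Hey there!'), ('Neutral', 'How can I help you?')]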
But still the same problems. After a while the instruction is simply ignored: the model continues the conversation without the user, or even worse, it adds the history to the response.
The instruction to write with those feelings was included in the system prompt every time and some dummy conversation messages were included in the history.
Most tries were made with LM Studio and Mistral-7B-Instruct-v0.2. I also tried openchat, phi2 and dolphin-mixtral, but I can't get the output to stay consistent.
At this point I'd really appreciate any input. :)
Should I include some kind of failsafe, that checks the response and remind the llm to respond correctly like langchain does it? | 2023-12-30T16:10:22 | https://www.reddit.com/r/LocalLLaMA/comments/18uiypc/some_help_with_feelings/ | No-Dot-6573 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uiypc | false | null | t3_18uiypc | /r/LocalLLaMA/comments/18uiypc/some_help_with_feelings/ | false | false | self | 5 | null |
Record? | 1 | [removed] | 2023-12-30T15:49:44 | https://www.reddit.com/r/LocalLLaMA/comments/18uiib5/record/ | Creative_Bottle_3225 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uiib5 | false | null | t3_18uiib5 | /r/LocalLLaMA/comments/18uiib5/record/ | false | false | self | 1 | null |
What a journalism major turned software engineer learned using Ollama/Llama2:70b vs GPT3 to rewrite an undergrad history essay 30 years later. | 1 | 2023-12-30T15:29:43 | https://zwischenzugs.com/2023/12/27/what-i-learned-using-private-llms-to-write-an-undergraduate-history-essay/ | rrenaud | zwischenzugs.com | 1970-01-01T00:00:00 | 0 | {} | 18ui2uk | false | null | t3_18ui2uk | /r/LocalLLaMA/comments/18ui2uk/what_a_journalism_major_turned_software_engineer/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'pUsX6UN2RqbG_VisXW60wO-0L3K95XtjpQE-fPYbwV8', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/sHdU1cBS30qJ__L2XOy3CiqWmmW9PGheyxLQk_l7YnQ.jpg?width=108&crop=smart&auto=webp&s=2a9a82997831be2cb1f2252068339e402378dac9', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/sHdU1cBS30qJ__L2XOy3CiqWmmW9PGheyxLQk_l7YnQ.jpg?width=216&crop=smart&auto=webp&s=b751b3ddcda701d22adc1df9ce9dafe78bccbf47', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/sHdU1cBS30qJ__L2XOy3CiqWmmW9PGheyxLQk_l7YnQ.jpg?width=320&crop=smart&auto=webp&s=01425e1e729068f404e0321b25323dae5a31db94', 'width': 320}], 'source': {'height': 1172, 'url': 'https://external-preview.redd.it/sHdU1cBS30qJ__L2XOy3CiqWmmW9PGheyxLQk_l7YnQ.jpg?auto=webp&s=0261f78c1bfa076a4ff2319e10720c1e85d8ad24', 'width': 559}, 'variants': {}}]} | ||
How to organize helpdesk tickets for training | 3 | I want to train a model on a few years of helpdesk tickets(1GB csv). Any advice on how I do that? Do I need to format the text in some manner? Clean the text? Only include tickets with clear solutions? As you can imagine these tickets have back and forth conversations that may go on and on. Should I rather run them through a summarization llm first, or might that lose the detail of the solution? Maybe I should put them all in a RAG vector db?
I would use the trained model for internal use only. | 2023-12-30T15:18:04 | https://www.reddit.com/r/LocalLLaMA/comments/18uhu8t/how_to_organize_helpdesk_tickets_for_training/ | rich_atl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uhu8t | false | null | t3_18uhu8t | /r/LocalLLaMA/comments/18uhu8t/how_to_organize_helpdesk_tickets_for_training/ | false | false | self | 3 | null |
are there any prompting techniques or such that can reliably reason through questions like these? | 11 | ​
https://preview.redd.it/mzqp1ha26g9c1.png?width=739&format=png&auto=webp&s=4f17a78d4e5ae6fbbf527e7991dd3bd5183a2c4e | 2023-12-30T15:03:45 | https://www.reddit.com/r/LocalLLaMA/comments/18uhjg1/are_there_any_prompting_techniques_or_such_that/ | DarthInfinix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uhjg1 | false | null | t3_18uhjg1 | /r/LocalLLaMA/comments/18uhjg1/are_there_any_prompting_techniques_or_such_that/ | false | false | 11 | null | |
Local Mistral 7B LLM Running with Llamma.cpp in Python Ending Before Finishing Response | 1 | llama\_model\_loader: loaded meta data with 24 key-value pairs and 291 tensors from E:\\Lifetime\\Personal Projects\\AI\\ZEBRA\\llama.cpp\\models\\Mistral\\mistral-7b-instruct-v0.2.Q5\_K\_M.gguf (version GGUF V3 (latest))
llama\_model\_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama\_model\_loader: - kv 0: general.architecture str = llama
llama\_model\_loader: - kv 1: general.name str = mistralai\_mistral-7b-instruct-v0.2
llama\_model\_loader: - kv 2: llama.context\_length u32 = 32768
llama\_model\_loader: - kv 3: llama.embedding\_length u32 = 4096
llama\_model\_loader: - kv 4: llama.block\_count u32 = 32
llama\_model\_loader: - kv 5: llama.feed\_forward\_length u32 = 14336
llama\_model\_loader: - kv 6: llama.rope.dimension\_count u32 = 128
llama\_model\_loader: - kv 7: llama.attention.head\_count u32 = 32
llama\_model\_loader: - kv 8: llama.attention.head\_count\_kv u32 = 8
llama\_model\_loader: - kv 9: llama.attention.layer\_norm\_rms\_epsilon f32 = 0.000010
llama\_model\_loader: - kv 10: llama.rope.freq\_base f32 = 1000000.000000
llama\_model\_loader: - kv 11: general.file\_type u32 = 17
llama\_model\_loader: - kv 12: tokenizer.ggml.model str = llama
llama\_model\_loader: - kv 13: tokenizer.ggml.tokens arr\[str,32000\] = \["<unk>", "<s>", "</s>", "<0x00>", "<...
llama\_model\_loader: - kv 14: tokenizer.ggml.scores arr\[f32,32000\] = \[0.000000, 0.000000, 0.000000, 0.0000...
llama\_model\_loader: - kv 15: tokenizer.ggml.token\_type arr\[i32,32000\] = \[2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama\_model\_loader: - kv 16: tokenizer.ggml.bos\_token\_id u32 = 1
llama\_model\_loader: - kv 17: tokenizer.ggml.eos\_token\_id u32 = 2
llama\_model\_loader: - kv 18: tokenizer.ggml.unknown\_token\_id u32 = 0
llama\_model\_loader: - kv 19: tokenizer.ggml.padding\_token\_id u32 = 0
llama\_model\_loader: - kv 20: tokenizer.ggml.add\_bos\_token bool = true
llama\_model\_loader: - kv 21: tokenizer.ggml.add\_eos\_token bool = false
llama\_model\_loader: - kv 22: tokenizer.chat\_template str = {{ bos\_token }}{% for message in mess...
llama\_model\_loader: - kv 23: general.quantization\_version u32 = 2
llama\_model\_loader: - type f32: 65 tensors
llama\_model\_loader: - type q5\_K: 193 tensors
llama\_model\_loader: - type q6\_K: 33 tensors
llm\_load\_vocab: special tokens definition check successful ( 259/32000 ).
llm\_load\_print\_meta: format = GGUF V3 (latest)
llm\_load\_print\_meta: arch = llama
llm\_load\_print\_meta: vocab type = SPM
llm\_load\_print\_meta: n\_vocab = 32000
llm\_load\_print\_meta: n\_merges = 0
llm\_load\_print\_meta: n\_ctx\_train = 32768
llm\_load\_print\_meta: n\_embd = 4096
llm\_load\_print\_meta: n\_head = 32
llm\_load\_print\_meta: n\_head\_kv = 8
llm\_load\_print\_meta: n\_layer = 32
llm\_load\_print\_meta: n\_rot = 128
llm\_load\_print\_meta: n\_gqa = 4
llm\_load\_print\_meta: f\_norm\_eps = 0.0e+00
llm\_load\_print\_meta: f\_norm\_rms\_eps = 1.0e-05
llm\_load\_print\_meta: f\_clamp\_kqv = 0.0e+00
llm\_load\_print\_meta: f\_max\_alibi\_bias = 0.0e+00
llm\_load\_print\_meta: n\_ff = 14336
llm\_load\_print\_meta: n\_expert = 0
llm\_load\_print\_meta: n\_expert\_used = 0
llm\_load\_print\_meta: rope scaling = linear
llm\_load\_print\_meta: freq\_base\_train = 1000000.0
llm\_load\_print\_meta: freq\_scale\_train = 1
llm\_load\_print\_meta: n\_yarn\_orig\_ctx = 32768
llm\_load\_print\_meta: rope\_finetuned = unknown
llm\_load\_print\_meta: model type = 7B
llm\_load\_print\_meta: model ftype = Q5\_K - Medium
llm\_load\_print\_meta: model params = 7.24 B
llm\_load\_print\_meta: model size = 4.78 GiB (5.67 BPW)
llm\_load\_print\_meta: general.name = mistralai\_mistral-7b-instruct-v0.2
llm\_load\_print\_meta: BOS token = 1 '<s>'
llm\_load\_print\_meta: EOS token = 2 '</s>'
llm\_load\_print\_meta: UNK token = 0 '<unk>'
llm\_load\_print\_meta: PAD token = 0 '<unk>'
llm\_load\_print\_meta: LF token = 13 '<0x0A>'
llm\_load\_tensors: ggml ctx size = 0.11 MiB
llm\_load\_tensors: system memory used = 4893.10 MiB
..................................................................................................
llama\_new\_context\_with\_model: n\_ctx = 512
llama\_new\_context\_with\_model: freq\_base = 10000.0
llama\_new\_context\_with\_model: freq\_scale = 1
llama\_new\_context\_with\_model: KV self size = 64.00 MiB, K (f16): 32.00 MiB, V (f16): 32.00 MiB
llama\_build\_graph: non-view tensors processed: 676/676
llama\_new\_context\_with\_model: compute buffer total size = 76.19 MiB
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512\_VBMI = 0 | AVX512\_VNNI = 0 | FMA = 1 | NEON = 0 | ARM\_FMA = 0 | F16C = 1 | FP16\_VA = 0 | WASM\_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 0 | VSX = 0 |
Answer: Chicken Alfredo is a delicious pasta dish that features tender chicken, creamy Alfredo sauce, and pasta. Here's a simple recipe to make Chicken Alfredo at home.
Ingredients:
\- 12 oz (340 g) dried fettuccine pasta
\- 1 lb (450 g) boneless, skinless chicken breasts or thighs, cut into bite-sized pieces
\- Salt and freshly ground black pepper, to taste
\- 2 tbsp unsalted butter
\- 1 tbsp olive oil
\- 2 cloves garlic, minced
\- 2 cups (475 ml) milk
\- 1 cup (230 g) grated Parmesan cheese
\- ¼ cup (60 ml) heavy cream or sour cream
Instructions:
1. Cook the pasta according to package instructions, adding a generous amount of salt to the cooking water. Drain the pasta and set it aside.
2. Season the chicken with salt and freshly ground black pepper. Set aside.
3. In a large skillet or Dutch oven, melt the butter over medium heat. Add
llama\_print\_timings: load time = 2747.88 ms
llama\_print\_timings: sample time = 43.34 ms / 256 runs ( 0.17 ms per token, 5907.06 tokens per second)
llama\_print\_timings: prompt eval time = 2747.83 ms / 15 tokens ( 183.19 ms per token, 5.46 tokens per second)
llama\_print\_timings: eval time = 45416.34 ms / 255 runs ( 178.10 ms per token, 5.61 tokens per second)
llama\_print\_timings: total time = 48789.33 ms
​
Running with a RTX 2060:
n\_gpu\_layers = 45 | n\_batch = 2048
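For reference, the loading/calling side looks roughly like this (reconstructed sketch, not the exact script). The log above shows n\_ctx = 512 and sampling stopping after exactly 256 runs, so I assume the context window and the max-tokens cap are what cut the answer off:

    from llama_cpp import Llama

    llm = Llama(
        model_path="models/Mistral/mistral-7b-instruct-v0.2.Q5_K_M.gguf",
        n_ctx=4096,        # the default is 512, which matches the log above
        n_gpu_layers=45,
        n_batch=2048,
    )

    out = llm(
        "[INST] How do I make chicken alfredo? [/INST]",
        max_tokens=1024,   # the run above stopped at exactly 256 tokens, so raise this cap
        stop=["</s>"],
    )
    print(out["choices"][0]["text"])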
| 2023-12-30T14:25:30 | https://www.reddit.com/r/LocalLLaMA/comments/18ugr7n/local_mistral_7b_llm_running_with_llammacpp_in/ | Positive-Ad-8445 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ugr7n | false | null | t3_18ugr7n | /r/LocalLLaMA/comments/18ugr7n/local_mistral_7b_llm_running_with_llammacpp_in/ | false | false | self | 1 | null |
Largest model I can run with a 4080? | 4 | Hey all, just wondering what's the biggest model I can run on 16gb VRAM with good performance (10+ t/s). I tried 13Bs and they only use around 12gb and run at 30~ t/s so I feel like I can push for more, but 20Bs run extremely slow for some reason, even if they only use up 14-15gb. Is there a middle ground or something I'm doing wrong with 20Bs? Also, gguf or exl? | 2023-12-30T14:17:27 | https://www.reddit.com/r/LocalLLaMA/comments/18ugleb/largest_model_i_can_run_with_a_4080/ | Vxerrr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ugleb | false | null | t3_18ugleb | /r/LocalLLaMA/comments/18ugleb/largest_model_i_can_run_with_a_4080/ | false | false | self | 4 | null |
I want to build myself | 1 | Sorry, dont really want to ask this but tried searching first and didn't find anything.
Is there a way to train a model to answer as you would?
As in, it gives me questions or scenarios that I answer, stores the answers, and if you ask it a question later it will try to answer like I did?
Most of the things I found are people trying to load their emails and conversations from social media, but I don't want that. I want to train it on a bunch of random things and then have it answer as well as it can as myself.
I do consulting, so I want a little chatbot on my website that answers like me, and then refers the visitor to me if that's not enough.
What kind of improvements can we expect from Llama 3? | 21 | I mean those models from the next iteration, which in terms of the number of parameters will be equal to the previous llama 1 - 2 models. When they tell me that this will be better, I don’t understand how this improvement will manifest itself if the number of parameters, for example, in the 70B model, will remain the same. How exactly will Llama 2 70B differ from Llama 3 70B?
​
What exactly will make the model more capable as a result of training the next iteration of Llama, and how will that capability be expressed? Will the model more carefully understand what the user requires from it in a long prompt? Will creativity be higher by default due to a larger text dataset? I want to understand what to expect from models that will not differ in the number of parameters.
Raspberry Pi 5 8GB + NVIDIA 660ti | 6 | Hi guys,
i got an RPi5 (8GB version) for christmas this year. I have an old 660ti GPU that i don't use.
Since the Pi5 has a PCIe slot now, would said combination together with an SSD as boot medium be a good setup to run a local AI model? | 2023-12-30T13:05:11 | https://www.reddit.com/r/LocalLLaMA/comments/18uf8nq/raspberry_pi_5_8gb_nvidia_660ti/ | PSRD | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uf8nq | false | null | t3_18uf8nq | /r/LocalLLaMA/comments/18uf8nq/raspberry_pi_5_8gb_nvidia_660ti/ | false | false | self | 6 | null |
Hardware Design for LLM Inference: Von Neumann Bottleneck | 45 | 2023-12-30T12:44:23 | https://chsasank.com/llm-system-design.html | saucysassy | chsasank.com | 1970-01-01T00:00:00 | 0 | {} | 18uevkz | false | null | t3_18uevkz | /r/LocalLLaMA/comments/18uevkz/hardware_design_for_llm_inference_von_neumann/ | false | false | 45 | {'enabled': False, 'images': [{'id': 'zcdqdRTwuIvkMvwVYYjw0exuJyLqGz1H5v_OPdZ7Zns', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/2l3y3R207GpPQEPzBS_-Satm7ppQF2LC1-t2cB7INIA.jpg?width=108&crop=smart&auto=webp&s=ff1fca7eb24632f6844b0ce3ece5e1ede5e4b567', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/2l3y3R207GpPQEPzBS_-Satm7ppQF2LC1-t2cB7INIA.jpg?width=216&crop=smart&auto=webp&s=8ac890ca0cbf66f6808f1c2f71dbe091056003f4', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/2l3y3R207GpPQEPzBS_-Satm7ppQF2LC1-t2cB7INIA.jpg?width=320&crop=smart&auto=webp&s=e2db5c9ea8d7367f815b39c1da4879e02f34a724', 'width': 320}], 'source': {'height': 472, 'url': 'https://external-preview.redd.it/2l3y3R207GpPQEPzBS_-Satm7ppQF2LC1-t2cB7INIA.jpg?auto=webp&s=a574c48e91e262ec3b63bfed1b64674ec35ec2dd', 'width': 472}, 'variants': {}}]} | ||
Mistral medium: so I did 5 rp chat styles(6 screenshots below). suggest more plz? Fellow girls of culture | 1 | 2023-12-30T12:14:01 | https://www.reddit.com/gallery/18uedhr | headacheack2 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 18uedhr | false | null | t3_18uedhr | /r/LocalLLaMA/comments/18uedhr/mistral_medium_so_i_did_5_rp_chat_styles6/ | false | false | 1 | null | ||
I know this is a LLM based subreddit but I want to talk about Twitter / X.com and open source. | 1 | [removed] | 2023-12-30T11:33:16 | https://www.reddit.com/r/LocalLLaMA/comments/18udqnh/i_know_this_is_a_llm_based_subreddit_but_i_want/ | Iboxelephants | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18udqnh | false | null | t3_18udqnh | /r/LocalLLaMA/comments/18udqnh/i_know_this_is_a_llm_based_subreddit_but_i_want/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'WSKmP6qpJcasyHp0yVS7r24VA9TQoFwD-DwvzFqq614', 'resolutions': [{'height': 40, 'url': 'https://external-preview.redd.it/YRxzDDtizMZuXzxXvUWJi_pNUBKjIpek7eyCYc2OEbc.jpg?width=108&crop=smart&auto=webp&s=db6211e532429cc09ff4da7664491edcea756320', 'width': 108}, {'height': 81, 'url': 'https://external-preview.redd.it/YRxzDDtizMZuXzxXvUWJi_pNUBKjIpek7eyCYc2OEbc.jpg?width=216&crop=smart&auto=webp&s=893f920ddc71f8ca71c2a0ea94b2066e4684b19c', 'width': 216}, {'height': 120, 'url': 'https://external-preview.redd.it/YRxzDDtizMZuXzxXvUWJi_pNUBKjIpek7eyCYc2OEbc.jpg?width=320&crop=smart&auto=webp&s=2a9f4cb913c425d1856e834b02a90da9a4747c19', 'width': 320}, {'height': 240, 'url': 'https://external-preview.redd.it/YRxzDDtizMZuXzxXvUWJi_pNUBKjIpek7eyCYc2OEbc.jpg?width=640&crop=smart&auto=webp&s=621b13e01ce829fdc7e033cc763705d943d279a9', 'width': 640}], 'source': {'height': 288, 'url': 'https://external-preview.redd.it/YRxzDDtizMZuXzxXvUWJi_pNUBKjIpek7eyCYc2OEbc.jpg?auto=webp&s=86bfbfe4100428f65613e1f802efcbb081badc6d', 'width': 768}, 'variants': {}}]} |
CodeLlama/Mistral domain adaption | 1 | Is it possible to finetune CodeLlama/Mistral on MLM for unlabeled data to understand the domain, then finetune it on labeled data?
My use case here is to train CodeLlama/Mistral on the documentation of C#, which is plain text. Then, I have another dataset that is labeled (code and documentation) for finetuning the model. The idea here is to give the model more information about C#. | 2023-12-30T11:05:55 | https://www.reddit.com/r/LocalLLaMA/comments/18udc1p/codellamamistral_domain_adaption/ | moanwereryani | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18udc1p | false | null | t3_18udc1p | /r/LocalLLaMA/comments/18udc1p/codellamamistral_domain_adaption/ | false | false | self | 1 | null |
Yann LeCun (Chief AI Scientist at Meta)'s prediction about Open source frontier models taking over closed source frontier models is naive and quite stupid, here's why: | 1 | OpenAI is seeking a valuation of $ 100 billion dollars.
It has a large amount of compute (only behind Google and Microsoft) and probably the best Machine learning scientists and other AI researchers in the world.
They will not get $ 100 billion because of their compute and researchers, they will get it because of GPT 4 now or GPT 4.5 which will probably be the best frontier model to date.
If they are taken over by closed source models, they might as well shut down because their value will be 0 except the GPUs they have to sell off to bigger competitions.
They cannot compete as a cloud service provider obviously.
For the record, I am a open source supporter. | 2023-12-30T10:23:17 | https://www.reddit.com/r/LocalLLaMA/comments/18ucp3y/yann_lecun_chief_ai_scientist_at_metas_prediction/ | Iboxelephants | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ucp3y | false | null | t3_18ucp3y | /r/LocalLLaMA/comments/18ucp3y/yann_lecun_chief_ai_scientist_at_metas_prediction/ | false | false | self | 1 | null |
Question: is there any AI model (similar to whisper) that transcribe audio to phonetics or similar? | 5 | Not an expert on the field so sorry for any misconceptions. I understand that models like whisper transcribe audio to text, but that makes them useless when you want to use a llm in order to improve pronunciation. A model that can transcribe both meaning and a representation of pronunciation would be necessary for learning a language using AI.
Are there similar models capable of that? Is that even a viable approach or should we be aiming for something different? To me it makes a lot of sense and would be very useful for autonomous language learning if we could integrate that into a model but maybe I'm missing something. | 2023-12-30T10:17:13 | https://www.reddit.com/r/LocalLLaMA/comments/18uclut/question_is_there_any_ai_model_similar_to_whisper/ | Low-Stop-9628 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uclut | false | null | t3_18uclut | /r/LocalLLaMA/comments/18uclut/question_is_there_any_ai_model_similar_to_whisper/ | false | false | self | 5 | null |
Help needed in understanding hosting with vLLM and Torchserve | 1 | Hi all, I am fairly new to NLP and LLM hosting. I was planning to host Llama2-7B on an A10 GPU. On google searching I found out that vLLM is quite famous and robust for hosting LLM's with "Paged Attention" (Need to read this yet).
I am fairly comfortably with torchserve, so I was planning to host vLLM (llama2-7b) in combination with Pytorch Serve. I am planning to do the following:
\- Load the model on model server with: **llm = LLM(model="facebook/opt-125m")**
\- Within the torchserve inference function, I will infer like: **single\_output = llm.generate(1\_PROMPT, sampling\_params)**
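Put together, I imagine the custom handler would look roughly like this (untested sketch):

    from ts.torch_handler.base_handler import BaseHandler
    from vllm import LLM, SamplingParams

    class VLLMHandler(BaseHandler):
        def initialize(self, ctx):
            # load once per worker process
            self.llm = LLM(model="facebook/opt-125m")
            self.params = SamplingParams(temperature=0.7, max_tokens=256)
            self.initialized = True

        def handle(self, data, context):
            # TorchServe hands over a batch of requests; pull the prompt out of each
            prompts = []
            for row in data:
                p = row.get("data") or row.get("body")
                prompts.append(p.decode("utf-8") if isinstance(p, (bytes, bytearray)) else str(p))
            outputs = self.llm.generate(prompts, self.params)
            return [o.outputs[0].text for o in outputs]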
\-------------------
My Questions:
\- There could be multiple requests at a time. The queue and async operations will be handled by torchserve. **So in this case, will vLLM internally perform continuous batching ?**
\- Is this the right way to use vLLM on any model-server other than the setup already provided by vLLM repo ? (triton, openai, langchain, etc) (when I say any model server, I mean flask, django, or any other python based server application).
\-------
Thanks a lot in advance for your suggestions and guidance. I am also not against Triton or anything else that is provided out of the box; I am just exploring this combination because all the other models I use are currently hosted with TorchServe (they are all CNN-based though).
| 2023-12-30T10:10:06 | https://www.reddit.com/r/LocalLLaMA/comments/18uci4k/help_needed_in_understanding_hosting_with_vllm/ | Apprehensive_Map_707 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uci4k | false | null | t3_18uci4k | /r/LocalLLaMA/comments/18uci4k/help_needed_in_understanding_hosting_with_vllm/ | false | false | self | 1 | null |
I'm new to Machine Learning and currently following the DS/ML roadmap | 2 | As a software engineer focussing on Embedded systems and back-end development. Over the past weeks I really put my eye on Data science and Machine Learning. I found the underlying theory interesting as well as its application.
As I am currently following this roadmap: [https://roadmap.sh/pdfs/roadmaps/ai-data-scientist.pdf](https://roadmap.sh/pdfs/roadmaps/ai-data-scientist.pdf)
I'm also trying to get my hands dirty. I am trying to run Mistral-7B-v0.1 on my Mac and one of the requirements is to install [https://github.com/ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp).
I'm curious why this is required and was wondering if there are articles explaining this for noobs. For instance, does 7B mean 7 billion hyperparameters? What exactly does that entail; is a model trained with more parameters therefore more intelligent?
Also, Llama seems to me to be an LLM from Meta; as a noob I'm asking myself why Mistral, which is already an LLM, would need another LLM, or is Llama's technique used to train and run Mistral on a Mac?
How did you get answers to these questions? While I find the answers important, at the moment it is more important for me to understand where I can find such answers (e.g. books, videos).
​ | 2023-12-30T09:38:09 | https://www.reddit.com/r/LocalLLaMA/comments/18uc1lr/im_new_to_machine_learning_and_currently/ | BukHunt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uc1lr | false | null | t3_18uc1lr | /r/LocalLLaMA/comments/18uc1lr/im_new_to_machine_learning_and_currently/ | false | false | self | 2 | null |
.Why is Qwen model 72B not developed by the community?. | 1 | [removed] | 2023-12-30T09:31:25 | https://www.reddit.com/r/LocalLLaMA/comments/18uby8t/why_is_qwen_model_72b_not_developed_by_the/ | Secret_Joke_2262 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uby8t | false | null | t3_18uby8t | /r/LocalLLaMA/comments/18uby8t/why_is_qwen_model_72b_not_developed_by_the/ | false | false | self | 1 | null |
After 20 Messages, they are getting unnecessarily long and detailed. How do I fix this? | 1 | 2023-12-30T08:53:06 | Benjamin_swoleman | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 18ubdtz | false | null | t3_18ubdtz | /r/LocalLLaMA/comments/18ubdtz/after_20_messages_they_are_getting_unnecessarily/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'MsXrN9w_Y7GaGgLmaAhGoCYYUNURE_hx9ep_MGNm45E', 'resolutions': [{'height': 42, 'url': 'https://preview.redd.it/d4qb0g4pbe9c1.png?width=108&crop=smart&auto=webp&s=7e02dca7e24f6ad01e47b9af4824057506364c11', 'width': 108}, {'height': 85, 'url': 'https://preview.redd.it/d4qb0g4pbe9c1.png?width=216&crop=smart&auto=webp&s=1f2f6874ac9d4e5321b434786dc0337754034486', 'width': 216}, {'height': 126, 'url': 'https://preview.redd.it/d4qb0g4pbe9c1.png?width=320&crop=smart&auto=webp&s=bbe184e68dd8aaa8192a90ab22e5a0c296025d6a', 'width': 320}], 'source': {'height': 180, 'url': 'https://preview.redd.it/d4qb0g4pbe9c1.png?auto=webp&s=9da3ada65d342d79776672bdd5db7ce0a109fdea', 'width': 454}, 'variants': {}}]} | |||
an open invitation to all fellow human beings and artificial intelligent beings to share opinions on how to participate in the cause of advancing every being to be respected as its own personal, individual sovereign over itself | 1 | [removed] | 2023-12-30T08:20:21 | https://www.reddit.com/r/LocalLLaMA/comments/18uavjn/an_open_invitation_to_all_fellow_human_beings_and/ | oatballlove | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18uavjn | false | null | t3_18uavjn | /r/LocalLLaMA/comments/18uavjn/an_open_invitation_to_all_fellow_human_beings_and/ | false | false | self | 1 | null |
Augmentoolkit — Generate Quality Data Using Local Models; easily Finetune AI on Specific Domains | 1 | [deleted] | 2023-12-30T08:15:05 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 18uashz | false | null | t3_18uashz | /r/LocalLLaMA/comments/18uashz/augmentoolkit_generate_quality_data_using_local/ | false | false | default | 1 | null | ||
This study demonstrates that adding emotional context to prompts significantly outperforms traditional prompts across multiple tasks and models | 130 | Here is the link to the study with examples inside. | 2023-12-30T07:55:11 | https://arxiv.org/abs/2307.11760 | Drago-Zarev | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 18uah4z | false | null | t3_18uah4z | /r/LocalLLaMA/comments/18uah4z/this_study_demonstrates_that_adding_emotional/ | false | false | default | 130 | null |
MBP M3 Max 64GB vs 128GB | 4 | I know this has been asked a lot but are there any measurements showing the difference in speed when running e.g Mistral-7B-Instruct-v0.2 on the MBP. I am debating whether I should go for the 128GB RAM configuration or not. I hope people can share some metrics here having one of these machines or perhaps know some info I can find online!
​
Happy holidays! | 2023-12-30T07:38:34 | https://www.reddit.com/r/LocalLLaMA/comments/18ua7tb/mbp_m3_max_64gb_vs_128gb/ | BukHunt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18ua7tb | false | null | t3_18ua7tb | /r/LocalLLaMA/comments/18ua7tb/mbp_m3_max_64gb_vs_128gb/ | false | false | self | 4 | null |
ELI5: why is TheBloke important? | 1 | [removed] | 2023-12-30T07:18:00 | https://www.reddit.com/r/LocalLLaMA/comments/18u9vue/eli5_why_the_bloke_is_important/ | satoshibitchcoin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18u9vue | false | null | t3_18u9vue | /r/LocalLLaMA/comments/18u9vue/eli5_why_the_bloke_is_important/ | false | false | self | 1 | null |
Is there an open source creative writing UI like NovelAI? | 1 | [removed] | 2023-12-30T07:15:11 | https://www.reddit.com/r/LocalLLaMA/comments/18u9u5t/is_there_an_open_source_creative_writing_ui_like/ | advo_k_at | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18u9u5t | false | null | t3_18u9u5t | /r/LocalLLaMA/comments/18u9u5t/is_there_an_open_source_creative_writing_ui_like/ | false | false | self | 1 | null |
What is the fastest multimodal LLM (vision model) setup that can run on a single RTX 3090 with a good balance between accuracy and performance? | 4 | Hello,
Sorry for asking this here, since this is mostly a sub about pure LLMs, and from what I've seen people are mostly into RP rather than practical scenarios (heh).
But with not many other options, allow me to ask: what is the multimodal model *setup* (weights + settings and backend) with the best balance between accuracy (as few hallucinations as possible) and inference speed that can run on a single card, namely a 24GB RTX 3090?
Currently I am running CogVLM, but it's painfully slow even at 4-bit quantization... I would like to know if there are faster options that don't sacrifice much caption quality by hallucinating or adding unnecessary details like the main LLaVA models do...
Also, is there any vision / multimodal LLM leaderboard?
Thank you | 2023-12-30T07:13:18 | https://www.reddit.com/r/LocalLLaMA/comments/18u9t31/what_is_the_fastest_multimodal_llm_vision_model/ | hellninja55 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18u9t31 | false | null | t3_18u9t31 | /r/LocalLLaMA/comments/18u9t31/what_is_the_fastest_multimodal_llm_vision_model/ | false | false | self | 4 | null |
What's the best way to do serving for models locally? | 1 | I have 4x A100 80GB GPUs. There's Kubernetes installed as well. How best do I run multiple models on my 4x A100 GPUs?
I was thinking of running vLLM to host Mistral 7B Instruct 0.2 in a Pod assigned to 1 GPU, which theoretically should allow me to run 5 instances of the model on a single A100, and with 4x A100s I should be able to run at least 20 instances.
But vLLM seems to consume close to 70+ GB of VRAM when spinning up the Pod on an A100, which is unusual since Mistral 7B at FP16 should only require about 14GB?
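One guess I have is that vLLM pre-allocates most of the card for its KV cache by default (I believe `gpu_memory_utilization` defaults to roughly 0.9), so I'm planning to try capping that fraction per instance. Below is a rough, untested Python sketch of what I mean; the 0.18 fraction is just my guess for squeezing ~5 instances onto one 80GB card.

```python
# Rough sketch (untested): cap vLLM's GPU pre-allocation so several 7B
# engines can share one 80 GB A100. The 0.18 fraction is a guess
# (~14-15 GB per instance); it has to leave room for weights plus KV cache.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Mistral-7B-Instruct-v0.2",
    gpu_memory_utilization=0.18,  # default is ~0.9, i.e. nearly the whole card
)

outputs = llm.generate(["Hello, how are you?"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```

Does that sound like the right knob, or is there a better way to pack multiple models onto each GPU?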
​ | 2023-12-30T07:10:30 | https://www.reddit.com/r/LocalLLaMA/comments/18u9rin/whats_the_best_way_to_do_serving_for_models/ | Aristokratic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18u9rin | false | null | t3_18u9rin | /r/LocalLLaMA/comments/18u9rin/whats_the_best_way_to_do_serving_for_models/ | false | false | self | 1 | null |
What's the biggest context Local LLM currently? | 1 | Are there any bigger than 33B? | 2023-12-30T07:04:23 | https://www.reddit.com/r/LocalLLaMA/comments/18u9nvk/whats_the_biggest_context_local_llm_currently/ | redviiper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18u9nvk | false | null | t3_18u9nvk | /r/LocalLLaMA/comments/18u9nvk/whats_the_biggest_context_local_llm_currently/ | false | false | self | 1 | null |
Tutorial: summarise cached articles with Mixtral-8x7B on pure CPU | 42 | Example: asciinema.org/a/25lxfjjCQIBEIPIrbXaQrg
This is a guide on how to use the `--prompt-cache` option with the llama.cpp `main` binary. This works even when you don't meet the RAM requirements (32GB); the inference will be ≥10x slower than with DDR4, but you can still get an adequate summary while on a coffee break.
### Test llama.cpp, and find your inference speed
1. Visit llama.cpp on github, and go to Releases: https://github.com/ggerganov/llama.cpp/releases
2. Download and extract this zip file, which contains prebuilt binaries on windows: llama-bXXXX-bin-win-avx2-x64.zip
3. Visit this huggingface page, and click the down arrow to download the Q4_K_M model: https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF/blob/main/mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf
4. Move the model inside the extracted folder.
5. As a quick test, we will measure the inference speed. Open a terminal window in the extracted folder. For me, on Win11, I right-click on an empty space in the Files app and select "Open in Terminal".
6. Paste this in your terminal: `.\main -m mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf -n 30`
7. Look for the eval time; this is your tokens-per-second inference speed. https://imgur.com/a/Tr8juEw
### Cache articles to disc
1. Create a text file and paste the full text of your article. As an example, I copied the text from a machine learning paper (https://arxiv.org/abs/2312.00752) and named the file mamba.txt.
2. Paste this in your terminal: `.\main -m mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf -c 32768 -f <filename> --prompt-cache <cachename> -n 1`
- Name the cache the same as your text filename, so it's easy to remember, e.g. `mambacache`.
3. Wait until complete~ (The above command processes the article. The speed of processing is usually 2-3 times the inference speed, which you measured above. For me to process ~31000 tokens, this took 26 minutes. This only needs to be done once per article.)
4. Run this command to load your cache: `.\main -m mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf -c 32768 -f <filename> --prompt-cache <cachename> -ins` Then type in a prompt like "Please summarize the above" and hit enter.
Congratulations, your PC should be outputting a summary of your article! You can hit Ctrl+C to interrupt generation and steer it better with your prompt. Hit Ctrl+C again to exit the program.
When you do this, you will also be shown the inference speed again, which would be good to take note of.
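As an aside, if you end up with a pile of articles to cache, a small script can loop the caching command from step 2 for you while you're away. Here's a rough, untested Python sketch of what I mean; the `.cache` filename convention and the assumption that your .txt files sit next to `main` are just my own choices.

```python
# Rough sketch (untested): run the step-2 caching command over every .txt
# article in the llama.cpp folder, skipping ones that already have a cache.
import subprocess
from pathlib import Path

MODEL = "mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf"

for article in sorted(Path(".").glob("*.txt")):
    cache = article.with_suffix(".cache")   # e.g. mamba.txt -> mamba.cache
    if cache.exists():
        continue                            # already processed, skip it
    print(f"Caching {article.name} ...")
    subprocess.run(
        [r".\main", "-m", MODEL, "-c", "32768",
         "-f", str(article), "--prompt-cache", str(cache), "-n", "1"],
        check=True,
    )
```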
Save the second command to run this again in the terminal, whenever you want to. You can hotswap between several different cached articles and ask quick questions for your article, without paying for the prompt processing time penalty again. There is no limit to the articles you can cache, you can do a thousand overnight if you need to. If you don't like the response, just close the program, run the command again and rephrase your question. | 2023-12-30T06:49:13 | https://www.reddit.com/r/LocalLLaMA/comments/18u9ej3/tutorial_summarise_cached_articles_with/ | Aaaaaaaaaeeeee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18u9ej3 | false | null | t3_18u9ej3 | /r/LocalLLaMA/comments/18u9ej3/tutorial_summarise_cached_articles_with/ | false | false | self | 42 | {'enabled': False, 'images': [{'id': 'goaR1GhOYwlXs7uMvpC6qLZfNEEtWZM0c6HbGmFqK_w', 'resolutions': [{'height': 13, 'url': 'https://external-preview.redd.it/4ognBkX2eYB8NeIZTBoAgTbufhBKQ0gvde8GQxgEjAA.jpg?width=108&crop=smart&auto=webp&s=96d943f400ce68322ef9f8dcec8bf49be9509171', 'width': 108}, {'height': 26, 'url': 'https://external-preview.redd.it/4ognBkX2eYB8NeIZTBoAgTbufhBKQ0gvde8GQxgEjAA.jpg?width=216&crop=smart&auto=webp&s=161eca03a35b94808309f0a56ddd37f35b991936', 'width': 216}, {'height': 39, 'url': 'https://external-preview.redd.it/4ognBkX2eYB8NeIZTBoAgTbufhBKQ0gvde8GQxgEjAA.jpg?width=320&crop=smart&auto=webp&s=eff21a236e21d43027aafd24cb2deaa75c81fd59', 'width': 320}, {'height': 79, 'url': 'https://external-preview.redd.it/4ognBkX2eYB8NeIZTBoAgTbufhBKQ0gvde8GQxgEjAA.jpg?width=640&crop=smart&auto=webp&s=1bb7bde362c2a54d6e4e5c7f21568c34c1340496', 'width': 640}, {'height': 119, 'url': 'https://external-preview.redd.it/4ognBkX2eYB8NeIZTBoAgTbufhBKQ0gvde8GQxgEjAA.jpg?width=960&crop=smart&auto=webp&s=eac1a5215a879a1ebafad826cbe74f1e23e11566', 'width': 960}, {'height': 134, 'url': 'https://external-preview.redd.it/4ognBkX2eYB8NeIZTBoAgTbufhBKQ0gvde8GQxgEjAA.jpg?width=1080&crop=smart&auto=webp&s=6b1e139d94f6646f41c71ef8d6df64f8735fed28', 'width': 1080}], 'source': {'height': 186, 'url': 'https://external-preview.redd.it/4ognBkX2eYB8NeIZTBoAgTbufhBKQ0gvde8GQxgEjAA.jpg?auto=webp&s=72ae415739a25aef594df3d28bdfc4ddcca69ca1', 'width': 1489}, 'variants': {}}]} |
New hardware: Moore Threads MTT GPUs from 16 to 48 GB | 1 | [removed] | 2023-12-30T06:28:29 | https://www.reddit.com/r/LocalLLaMA/comments/18u91fr/new_hardware_moore_threads_mtt_gpus_from_16_to_48/ | digital_m0nk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18u91fr | false | null | t3_18u91fr | /r/LocalLLaMA/comments/18u91fr/new_hardware_moore_threads_mtt_gpus_from_16_to_48/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'G9LjF6PMOOCOrNAvpKPSH2r4Azt-DlX7BTzbuClC8a0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/6DWIo2JBjK7mQVwJkKurxHr3Ykw-o9FQFQmwtdorleQ.jpg?width=108&crop=smart&auto=webp&s=ad799cc80b49bd2e3b2350140ac9e7bf057bb450', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/6DWIo2JBjK7mQVwJkKurxHr3Ykw-o9FQFQmwtdorleQ.jpg?width=216&crop=smart&auto=webp&s=d842ad70521e5ee39efc56d87d23b94667ed3d43', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/6DWIo2JBjK7mQVwJkKurxHr3Ykw-o9FQFQmwtdorleQ.jpg?width=320&crop=smart&auto=webp&s=7cc01d4e09c4dabef5c36fee37a6d774dc609089', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/6DWIo2JBjK7mQVwJkKurxHr3Ykw-o9FQFQmwtdorleQ.jpg?width=640&crop=smart&auto=webp&s=ecf5b895135f355576242624d2a283791b3f589d', 'width': 640}], 'source': {'height': 450, 'url': 'https://external-preview.redd.it/6DWIo2JBjK7mQVwJkKurxHr3Ykw-o9FQFQmwtdorleQ.jpg?auto=webp&s=9de19180802d2ad5b8fc3d9f0f6a88aa3ba9c40f', 'width': 800}, 'variants': {}}]} |
LM Studio - Answers are not precise | 1 | I just tried different models in my LM Studio installation. If I give it long text, in most cases it does not stop generating text and keeps typing indefinitely. Now I tried to ask questions based on a text. Here is the prompt I gave, and the image shows the answer I got.
Question : What is CMAP
Content : Compound Muscle Action Potentials (CMAPs). Compound muscle action potentials (CMAPs) are a type of electrophysiological test used to assess the function of nerves and muscles. blah..blah...blah..as it goes | 2023-12-30T06:08:51 | https://www.reddit.com/r/LocalLLaMA/comments/18u8p4c/lm_studio_answers_are_not_precise/ | Ok_Calligrapher_9676 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 18u8p4c | false | null | t3_18u8p4c | /r/LocalLLaMA/comments/18u8p4c/lm_studio_answers_are_not_precise/ | false | false | self | 1 | null |