title
stringlengths
1
300
score
int64
0
8.54k
selftext
stringlengths
0
41.5k
created
timestamp[ns]date
2023-04-01 04:30:41
2026-03-04 02:14:14
url
stringlengths
0
878
author
stringlengths
3
20
domain
stringlengths
0
82
edited
timestamp[ns]date
1970-01-01 00:00:00
2026-02-19 14:51:53
gilded
int64
0
2
gildings
stringclasses
7 values
id
stringlengths
7
7
locked
bool
2 classes
media
stringlengths
646
1.8k
name
stringlengths
10
10
permalink
stringlengths
33
82
spoiler
bool
2 classes
stickied
bool
2 classes
thumbnail
stringlengths
4
213
ups
int64
0
8.54k
preview
stringlengths
301
5.01k
Extracting only text from PDF
1
Sounds easy, but ONLY getting the written text without ANYTHING else seems to be surprisingly difficult (meaning tables, figures etc). I've tried a variety of conversions etc, but none of them do this very basic thing. They all try to do more, but in plain text its an absolute mess. Would be amazing if someone knows a good method! The best so far was adobe/pdf reader: manual copy paste. But then you have to watch out not to mark anything else than text... plus it just turned 2024 and i dont see myself doing that for 50 pages for one pdf (without sources) what I want to do is to have Mistal-medium create detailed notes of a paper. I just tried that with 4-turbo, and it worked ok, but the formatting was clearly giving it some trouble /caused confusion. (and waay to expensive for that) I've already created some manual notes for in context learning, that is about 4k tokens, so i'd just split the paper for mistral-medium. My script (made for sth else) however, uses plain text input, and most of the tables / graphs etc are not crucial to create notes. I've already looked for the question, but it seems most people want to extract ALL of the info properly formatted, not just the basic/plain text.
2024-01-04T20:59:34
https://www.reddit.com/r/LocalLLaMA/comments/18ynrve/extracting_only_text_from_pdf/
leschnoid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ynrve
false
null
t3_18ynrve
/r/LocalLLaMA/comments/18ynrve/extracting_only_text_from_pdf/
false
false
self
1
null
Are there any models that have been trained to be guided by JSON schema well?
1
Thank you
2024-01-04T20:53:56
https://www.reddit.com/r/LocalLLaMA/comments/18ynn1s/are_there_any_models_that_have_been_trained_to_be/
richardanaya
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ynn1s
false
null
t3_18ynn1s
/r/LocalLLaMA/comments/18ynn1s/are_there_any_models_that_have_been_trained_to_be/
false
false
self
1
null
llama 13b on raspberry pi - slow but still works. This has just opened up other modals for use on rasperry pi 4 x 8gb. Next experiment is to try getting auto-llama-cpp to run using this modal. Even if its slow it would be a great addition to my toolkit.
1
2024-01-04T19:43:07
https://i.redd.it/fjbtlobz7hac1.png
Purple_Session_6230
i.redd.it
1970-01-01T00:00:00
0
{}
18ylx1n
false
null
t3_18ylx1n
/r/LocalLLaMA/comments/18ylx1n/llama_13b_on_raspberry_pi_slow_but_still_works/
false
false
https://b.thumbs.redditm…3rmEteXzWDLM.jpg
1
{'enabled': True, 'images': [{'id': 'H4puecOm5tt4iIsZGyJ-VFtnrt3RlyRHHCq6L12Rw3U', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/fjbtlobz7hac1.png?width=108&crop=smart&auto=webp&s=5848f67ad29751451d1dcce0b8428a844e61d1ae', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/fjbtlobz7hac1.png?width=216&crop=smart&auto=webp&s=23669abc139e136815df5f4d9762e735ed70c36f', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/fjbtlobz7hac1.png?width=320&crop=smart&auto=webp&s=5871ba667d514c634aa9e10b3420a0011528b9c4', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/fjbtlobz7hac1.png?width=640&crop=smart&auto=webp&s=01fa52aeac6d6d82845c0f03ae24719dbb530d51', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/fjbtlobz7hac1.png?width=960&crop=smart&auto=webp&s=f64599cc3a113355d42f5efc73651319a813e678', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/fjbtlobz7hac1.png?width=1080&crop=smart&auto=webp&s=0d1a9352fff93ac77c37284c86aae249cf49191f', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/fjbtlobz7hac1.png?auto=webp&s=5cff8cf0fd87a20951f94f1452a725bc452b5d8e', 'width': 1920}, 'variants': {}}]}
AI Alignment - Weak-to-strong generalisation (W2SG) explained:
1
2024-01-04T19:40:49
https://rnikhil.com/2024/01/04/ai-weak-strong-generalization-openai.html
Excellent-Effect237
rnikhil.com
1970-01-01T00:00:00
0
{}
18yluw2
false
null
t3_18yluw2
/r/LocalLLaMA/comments/18yluw2/ai_alignment_weaktostrong_generalisation_w2sg/
false
false
default
1
null
This guy, hu-po, does great easy-to-follow readings of LLM papers. I watch/listen to them while cooking/eating.
1
[removed]
2024-01-04T19:17:39
https://www.reddit.com/r/LocalLLaMA/comments/18ylali/this_guy_hupo_does_great_easytofollow_readings_of/
Nano_9a9o
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ylali
false
null
t3_18ylali
/r/LocalLLaMA/comments/18ylali/this_guy_hupo_does_great_easytofollow_readings_of/
false
false
self
1
{'enabled': False, 'images': [{'id': 'j6BRxt3UtniJIZMLEW_2Cbp1rcKVm8Fe67mhRsdIy44', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/hOSHD7NDHq1G6vsUHU25re6EMcmTShRFbeUDJMF4nAE.jpg?width=108&crop=smart&auto=webp&s=8bc7aa248115e5a6e63ea79f291450102202ea52', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/hOSHD7NDHq1G6vsUHU25re6EMcmTShRFbeUDJMF4nAE.jpg?width=216&crop=smart&auto=webp&s=90fb4ae9dbb91a6f8ddf179d86822cb5073fe396', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/hOSHD7NDHq1G6vsUHU25re6EMcmTShRFbeUDJMF4nAE.jpg?width=320&crop=smart&auto=webp&s=91d329a2da05fa1c5f78c4809f91e675b7c743c6', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/hOSHD7NDHq1G6vsUHU25re6EMcmTShRFbeUDJMF4nAE.jpg?auto=webp&s=9e0f2b077a86edfbfec97269fc40990a60433380', 'width': 480}, 'variants': {}}]}
Which 13b suitable for chatting?
1
[removed]
2024-01-04T19:16:02
https://www.reddit.com/r/LocalLLaMA/comments/18yl95w/which_13b_suitable_for_chatting/
Saihhold_Zhao
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18yl95w
false
null
t3_18yl95w
/r/LocalLLaMA/comments/18yl95w/which_13b_suitable_for_chatting/
false
false
self
1
null
Guide for oogaboooga on amd using rocm gpu on linux ubuntu and fedora
1
Here's a guide to using ooogaboooga textui with an amd gpu on linux! Step 1: Installing rocm Get rocm libraries on [https://rocm.docs.amd.com/projects/install-on-linux/en/latest/tutorial/quick-start.html](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/tutorial/quick-start.html). Follow the basic instructions. For ubuntu only: (since most of you will be on it) sudo apt update wget [https://repo.radeon.com/amdgpu-install/6.0/ubuntu/jammy/amdgpu-install\_6.0.60000-1\_all.deb](https://repo.radeon.com/amdgpu-install/6.0/ubuntu/jammy/amdgpu-install_6.0.60000-1_all.deb) sudo apt install ./amdgpu-install\_6.0.60000-1\_all.deb sudo amdgpu-install --usecase=rocm sudo usermod -a -G render,video $LOGNAME For fedora only: the original installer is wack. I had to use manual install, and amdgpu-dkms would never install for some reason. However, it isn't needed! All you need is to get rocm-hip-libraries. So just follow the manual instal guide to get those libraries, and then install them. You do NOT need amdgpu-dkms, and I never found out how to get it working. sudo yum install rocm-hip-libraries --once this finishes, you are good! Don't forget to run this code below, or you will only be able to use rocm as root. sudo usermod -a -G render,video $LOGNAME also, run rocminfo to make sure it worked. Restart the computer! If you just added your user to the render and video section, then reboot or just log out and log back in. then see if rocminfo runs without root. If it does, continue. Step 2: Downloading Download oogabooga. First download git through your repository package manager if you havent already. Then, run git clone [https://github.com/oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui).git cd text-generation-webui ./start\_linux.sh Press b to install for rocm, annd just wait for it to download like normal. ​ Step 3: Configure ooga. First, run rocminfo, and figure out what gfx version your card is. It should say agent, and then the name will say something like gfx1032 or gfx 1100 depending on your card. THIS IS IMPORTANT. remember your gfx version. Still in the ooga folder, look for one\_click.py and edit it. nano one\_click.py I think it is line 15, but there is a comment that says Remove the '# ' from the following lines as needed for your AMD GPU on Linux Beneath it there are a few lines of code that are commented out. Remove them, and insert these: os.environ\["ROCM\_PATH"\] = '/opt/rocm' os.environ\["HSA\_OVERRIDE\_GFX\_VERSION"\] = '10.3.0' os.environ\["HCC\_AMDGPU\_TARGET"\] = 'gfx1032' REPLACE THIS os.environ\["PATH"\] = '/opt/rocm/bin:$PATH' os.environ\["LD\_LIBRARY\_PATH"\] = '/opt/rocm/lib:$LD\_LIBRARY\_PATH' os.environ\["CUDA\_VISIBLE\_DEVICES"\] = '0' os.environ\["HCC\_SERIALIZE\_KERNEL"\] = '0x3' os.environ\["HCC\_SERIALIZE\_KERNEL"\]='0x3' os.environ\["HCC\_SERIALIZE\_COPY"\]='0x3' os.environ\["HIP\_TRACE\_API"\]='0x2' replace HCC\_AMDGPU\_TARGET gfx with YOUR GFX VERSION. Most likely you do not have the same type as me. These are setting variables, but since we saved them this way, we never have to set them again! Tbh, i don't know what half of them do. I do know that I need them in order to run. So use them too! And don't ask questions about them, because I can't answer them. Llama.ccp should in theory pick the correct gpu. I have two gpu's, and it picks the correct one. However, if it doesn't, you should be able to put os.environ\["HIP\_VISIBLE\_DEVICES"\] = '1' or maybe set it equal to 0, or 3. Who knows. You probably won't run into the error No Devices Found, but if you do try using that. ​ Step 4: Run it! Use ./start\_linux.sh, and it should all start just fine every time you do this. Make sure to offload layers to gpu and whatnot, just have fun. I had alot of issues with extensions, and none of the web search ones worked for me :/. Hopefully you guys have better luck though! ​ Let me know if you have any errors or issues. However, this should mostly cover it for oogabooga on linux with amd. ​ Finally: Credit to [u/Combinatorilliance](https://www.reddit.com/user/Combinatorilliance/) for their guide that originally helped me! Their guide is specific to llama.ccp, but I use parts of it aswell. Also credit to Mr.UserBox on discor, since he helped me find the right commands in the second half of this guide. [https://www.reddit.com/r/LocalLLaMA/comments/170tghx/guide\_installing\_rocmhip\_for\_llamacpp\_on\_linux/](https://www.reddit.com/r/LocalLLaMA/comments/170tghx/guide_installing_rocmhip_for_llamacpp_on_linux/)
2024-01-04T18:52:09
https://www.reddit.com/r/LocalLLaMA/comments/18yko0r/guide_for_oogaboooga_on_amd_using_rocm_gpu_on/
thesawyer7102
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18yko0r
false
null
t3_18yko0r
/r/LocalLLaMA/comments/18yko0r/guide_for_oogaboooga_on_amd_using_rocm_gpu_on/
false
false
self
1
null
Asked Mixtral 8x7b to write a poem about Japan, instead it devolved into creepy self aware reflection
1
2024-01-04T18:11:18
https://i.redd.it/ekyk5vewrgac1.png
iamjaiyam
i.redd.it
1970-01-01T00:00:00
0
{}
18yjne4
false
null
t3_18yjne4
/r/LocalLLaMA/comments/18yjne4/asked_mixtral_8x7b_to_write_a_poem_about_japan/
false
false
https://b.thumbs.redditm…mk1f_eVYWYKE.jpg
1
{'enabled': True, 'images': [{'id': 'BvOY8DSrDotbBtmv7dlxyGAmtahbKtBR3eMhj_Jf8ok', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/ekyk5vewrgac1.png?width=108&crop=smart&auto=webp&s=e3bb49b777801e42c6da84428c98bcf23ef93748', 'width': 108}, {'height': 230, 'url': 'https://preview.redd.it/ekyk5vewrgac1.png?width=216&crop=smart&auto=webp&s=5411718ebca9bb9a869a24d250969cdaff1b08f2', 'width': 216}, {'height': 340, 'url': 'https://preview.redd.it/ekyk5vewrgac1.png?width=320&crop=smart&auto=webp&s=ab55d4578306622962f5b94cf54cba299134295f', 'width': 320}, {'height': 681, 'url': 'https://preview.redd.it/ekyk5vewrgac1.png?width=640&crop=smart&auto=webp&s=f8dd6d3658512a7371afe310b0cb9b491ddd2793', 'width': 640}, {'height': 1022, 'url': 'https://preview.redd.it/ekyk5vewrgac1.png?width=960&crop=smart&auto=webp&s=510591d85eadfa118a9af7f431f13f0a504bbffb', 'width': 960}, {'height': 1150, 'url': 'https://preview.redd.it/ekyk5vewrgac1.png?width=1080&crop=smart&auto=webp&s=52f25796defec9e8a22263cadd2544d96c917716', 'width': 1080}], 'source': {'height': 1840, 'url': 'https://preview.redd.it/ekyk5vewrgac1.png?auto=webp&s=0ce0fc6fa0d12ef45dbfbb847b9b1362dbcc5bc0', 'width': 1727}, 'variants': {}}]}
Inference throughput improvement
1
So I have a custom trained Vicuna 7B based model and for inference, I tried vLLM and TGI want to increase throughput more. I want it to server multiple requests simultaneously as server. I see my GPU memory and Utilisation hitting walls. With TGI since there is no batch endpoint, I'm spawning multiple threads and hitting the API. Are there better inference engines to try out? I was trying out optimum-nvidia/ powerinfer but facing issues. Ctranslate and llama.cpp not getting better results, infact less throughput than already mentioned. What am I doing wrong. Please suggest optimisation to check/ other engines to try out Running on Nvidia A6000
2024-01-04T17:35:51
https://www.reddit.com/r/LocalLLaMA/comments/18yishs/inference_throughput_improvement/
wafax69
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18yishs
false
null
t3_18yishs
/r/LocalLLaMA/comments/18yishs/inference_throughput_improvement/
false
false
self
1
null
Sqlcoder use case
1
Has anyone used sqloder in production or as part of their daily workflow? I tried using it with ollama, but it didn't provide the response I was looking for. Additionally, I want to fine-tune it, but I can't find any resources on how to do so without sacrificing its quality. Lastly, I also attempted to use RAG for chatting with my database but couldn't find any hints. What should I do to make SQLCoder perform at its best?
2024-01-04T17:27:08
https://www.reddit.com/gallery/18yiko2
laveriaroha
reddit.com
1970-01-01T00:00:00
0
{}
18yiko2
false
null
t3_18yiko2
/r/LocalLLaMA/comments/18yiko2/sqlcoder_use_case/
false
false
https://b.thumbs.redditm…1iFHlsmKVf5U.jpg
1
null
Had a bizzare encounter with Mira Murati of OpenAI yesterday.....
1
I saw Mira Murati at the Maybelline makeup store yesterday. I told her how cool it was to meet her in person, but I didn’t want to be a douche and bother her and ask her for photos or anything. She said, “Oh, like you’re doing now?” I was taken aback, and all I could say was “Huh?” but she kept cutting me off and going “huh? huh? huh?” and closing her hand shut in front of my face. I walked away and continued with my shopping, and I heard her chuckle as I walked off. When I came to pay for my stuff up front I saw her trying to walk out the doors with like fifteen makeup kits without paying. The girl at the counter was very nice about it and professional, and was like “Ma'am, you need to pay for those first.” At first she kept pretending to be tired and not hear her, but eventually turned back around and brought them to the counter. When the cashier took one of the makeup kit from her and started scanning it multiple times, Mira stopped her and told her to scan them each individually “to prevent any electrical infetterence,” and then turned around and winked at me. I don’t even think that’s a word. After she scanned each kit and put them in a bag and started to say the price, Mira kept interrupting her by yawning really loudly.
2024-01-04T17:24:04
https://www.reddit.com/r/LocalLLaMA/comments/18yihwb/had_a_bizzare_encounter_with_mira_murati_of/
TysonUsykFury
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18yihwb
false
null
t3_18yihwb
/r/LocalLLaMA/comments/18yihwb/had_a_bizzare_encounter_with_mira_murati_of/
false
false
self
1
null
Super easy gguf llama inference on cpu with python - looking for colab and contributions
1
I'm looking for some developer who'd be interested in helping develop this simple project that tries to maximally simplify gguf models deployment on cpu to make this tech more accessible [https://github.com/laelhalawani/glai](https://github.com/laelhalawani/glai) it's a llama-ccp wrapper that simplifies use of llama based models. It features a built in ModelDB with json entries that can be used to automatically download and deploy quantized gguf models from hf. Then there are to classes AutoAI and EasyAI. The frist one takes min of 3 arguments including search query or path or url and max tokens and max input tokens. The later allows configuration of the model in few simple steps. There's a bunch of examples and detailed documentation already. The project is also published on pypi \`pip install glai\`
2024-01-04T17:21:46
https://www.reddit.com/r/LocalLLaMA/comments/18yify8/super_easy_gguf_llama_inference_on_cpu_with/
--lael--
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18yify8
false
null
t3_18yify8
/r/LocalLLaMA/comments/18yify8/super_easy_gguf_llama_inference_on_cpu_with/
false
false
self
1
{'enabled': False, 'images': [{'id': 'TVkx8pexGCUquJwVr1f3QH_uB0VbKjxELBO3E2sQWXc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8-lvdewi6zlI-Xa_8cb_ZZleOFc_mnmTw7dYEf2PNQs.jpg?width=108&crop=smart&auto=webp&s=9b04f7d90ad26d5145307fa0429ac9f769e9a92e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8-lvdewi6zlI-Xa_8cb_ZZleOFc_mnmTw7dYEf2PNQs.jpg?width=216&crop=smart&auto=webp&s=eb5da3088219f2537a14e838059cc05a62934dfb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8-lvdewi6zlI-Xa_8cb_ZZleOFc_mnmTw7dYEf2PNQs.jpg?width=320&crop=smart&auto=webp&s=50efa9e359f548c2a312d30582e4781e3c1ffc4a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8-lvdewi6zlI-Xa_8cb_ZZleOFc_mnmTw7dYEf2PNQs.jpg?width=640&crop=smart&auto=webp&s=d3131eb59f1c079bda0965efd36a8c2f768022a6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8-lvdewi6zlI-Xa_8cb_ZZleOFc_mnmTw7dYEf2PNQs.jpg?width=960&crop=smart&auto=webp&s=ebe6326c0e88d56590a93b8ff3dcc84280d6ff8a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8-lvdewi6zlI-Xa_8cb_ZZleOFc_mnmTw7dYEf2PNQs.jpg?width=1080&crop=smart&auto=webp&s=56ee63a0294bbc6600233c95a8a9fe3240a94886', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8-lvdewi6zlI-Xa_8cb_ZZleOFc_mnmTw7dYEf2PNQs.jpg?auto=webp&s=3dfeac2bf3d2f0918d344fcc2212db910814b441', 'width': 1200}, 'variants': {}}]}
Streaming response from Llama 2
1
Anyone knows how to create streaming response from llama 2 running on colab with Langchain?
2024-01-04T17:11:47
https://www.reddit.com/r/LocalLLaMA/comments/18yi78c/streaming_response_from_llama_2/
RelationshipHater
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18yi78c
false
null
t3_18yi78c
/r/LocalLLaMA/comments/18yi78c/streaming_response_from_llama_2/
false
false
self
1
null
LLM and their response to prompts
1
I've been intrigued by the diverse range of responses generated by different large language models (LLMs) when presented with the same prompts. I'm on the lookout for academic papers or research that specifically address and delve into the fascinating realm of variability in LLM outputs. Please share if anyone have reference. Thanks.
2024-01-04T16:51:10
https://www.reddit.com/r/LocalLLaMA/comments/18yhpf6/llm_and_their_response_to_prompts/
sunshine_010
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18yhpf6
false
null
t3_18yhpf6
/r/LocalLLaMA/comments/18yhpf6/llm_and_their_response_to_prompts/
false
false
self
1
null
LLM suited for tabular data, mostly CSV and xls from finance and healthcare
1
Due to the nature of the files data, I'd like to know which local LLM would be best suited to answer questions on tabular data; max, min, grouped data, pivot equivalents on xls files; bonus if it can do charts too. I can clean up the column headers with good descriptive headers but what LLM can fuzz out automagically these basic questions every time.
2024-01-04T16:23:35
https://www.reddit.com/r/LocalLLaMA/comments/18yh1me/llm_suited_for_tabular_data_mostly_csv_and_xls/
10vatharam
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18yh1me
false
null
t3_18yh1me
/r/LocalLLaMA/comments/18yh1me/llm_suited_for_tabular_data_mostly_csv_and_xls/
false
false
self
1
null
Help in Creating RheoGPT
1
Hi everyone and happy new year! I'm a somewhat new member of the sub - as well as a beginner into the world of custom LLMs, with a couple of unconventional usecase ideas (at least from what people have been telling me). One of them is the creation of what I'm calling RheoGPT. This idea comes from an experiment with language proposed by theoretical physicist David Bohm (1980), with the purpose of develop such structures of language: >[“in which movement is to be taken as primary in our thinking and in which this notion will be incorporated into the language structure by allowing the verb rather than the noun to play a primary role” (Bohm 1980, 30).](https://www.researchgate.net/publication/255588769_The_Rheomode_of_Language_of_David_Bohm_Is_this_an_idea_without_a_precedent_in_the_history_of_thought) This would compose an entirely different way of expression when compared to our most common objective-oriented languages. And I'd love to see a LLM that natively incorporates such principles, so as to serve a native speaker and teacher of rheomode. But I'm unsure of the best path to achieve that. If it'd be the fine tuning of an existing model, the use of RAGs or even the training of a entirely new model from scratch (definitely the most challenging option). As of now I'm doing a simple test with a custom chatGPT, but it really struggles to ""get into character" and it server more as a commentator on what's rheomode (using conventional language structures), as opposed to a native speaker. I'd really appreciate any thoughts and tips into how I could move towards the creation of RheoGPT. Thank you very much!
2024-01-04T15:53:22
https://www.reddit.com/r/LocalLLaMA/comments/18ygbjj/help_in_creating_rheogpt/
dnllvrvz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ygbjj
false
null
t3_18ygbjj
/r/LocalLLaMA/comments/18ygbjj/help_in_creating_rheogpt/
false
false
self
1
null
Devs: What small building blocks would help you build better AI apps?
1
TL;DR: I want to work on some small python components that can be plugged into apps so we can build better tools easily. A couple days ago, I posted a demo of an always-on personal AI assistant. I'm excited to build the idea, but there are tons of little components needed to make it work, for example: * Flexible prompt chains that work like flowcharts: can maintain state, conditionally execute, trigger events, be debuggable, etc. * Context Manager to handle "unlimited" chat history, inject external/RAG data, and rank each to determine what should be added to a limited LLM context. * RAG that is more than just files. It could be files of course, but also could be to do list items, external events, search results, or current date/time/weather/etc. * SQLite tables with simple APIs to store chat histories, content of various types, etc. * A job queue to prioritize and run AI requests. * Standardized APIs so components of other apps could be used within another. Chat history is something I haven't implemented after building multiple small apps, so I'm focusing on the context manager right now. Then I plan to try to improve my prompt chain implementation to run on a backend. Are there some problems you encounter while developing that might be fixed in some of the AI frameworks, but you would like as a small pop-in library for your app? I'd like to try to build and open source some tiny components, under a name like ai\_tools or ai\_blocks, and maybe work together to make them easy to add to a project, customizable, and provide a better user experience with little work from developers.
2024-01-04T15:53:05
https://www.reddit.com/r/LocalLLaMA/comments/18ygbb8/devs_what_small_building_blocks_would_help_you/
AndrewVeee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ygbb8
false
null
t3_18ygbb8
/r/LocalLLaMA/comments/18ygbb8/devs_what_small_building_blocks_would_help_you/
false
false
self
1
null
How do you keep LLM conversation memory?
2
Hi everyone! I'm developing a chatbot capable of track the entire history with a user. I'm currently using Goliath 120B, with Redis and langchain to return inside the prompt the latest 30 messages with the format USER: ASSISTANT:. Obviously, after 30 messages it completely forgets what the user said before, do you use any alternative to this that can keeps track of the entire history? I read about **ConversationSummaryBufferMemory and ConversationSummaryMemory,** what do you thin about them? Are they a valida alternative? Otherwise let me know if you have something else in mind! Thank you
2024-01-04T15:29:58
https://www.reddit.com/r/LocalLLaMA/comments/18yfsgw/how_do_you_keep_llm_conversation_memory/
Sapessiii
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18yfsgw
false
null
t3_18yfsgw
/r/LocalLLaMA/comments/18yfsgw/how_do_you_keep_llm_conversation_memory/
false
false
self
2
null
Concept for portable setup
1
The goal is to create a portable, completely open source setup that is able to scale (meaning adding agents up to the hundreds, maybe even thousands). In the future it should use local GPUs, but in the beginning rented GPUs online. The context length should be extensive (like every book in LOTR universe). In the beginning it's only used to write stories and books. I don't have a good hardware, but I hope I can afford it within a year. For now the plan is to create an virtual machine (Ubuntu - i think the OS doesn't really matter, correct?) on an old I7 with 16GB RAM and a 10TB HDD (per USB). On this virtual machine I will use docker containers . I don't have any experience with docker, but I want get used to containers - or isn't that beneficial in this use case? Oobabooga will be used to handle different models (want to use Mistral 7b for writing, since it can be run in 24GB VRAM and I am looking to buy such a GPU next year) for different agents. MemGPT for the context length. Those are the first steps, not sure how I will continue from there on. Is this plan okay to get started? I appreciate any input. Thanks.
2024-01-04T15:26:47
https://www.reddit.com/r/LocalLLaMA/comments/18yfpym/concept_for_portable_setup/
RotjeCore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18yfpym
false
null
t3_18yfpym
/r/LocalLLaMA/comments/18yfpym/concept_for_portable_setup/
false
false
self
1
null
Help with Linux
1
What would be some of the recommended software to installed (like must-haves) for someone on Linux?
2024-01-04T15:23:27
https://www.reddit.com/r/LocalLLaMA/comments/18yfn9p/help_with_linux/
Unprotectedtxt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18yfn9p
false
null
t3_18yfn9p
/r/LocalLLaMA/comments/18yfn9p/help_with_linux/
false
false
self
1
null
Tesla P4 users - what is your favorite LLM to run?
1
I currently have (2) Tesla P4s in my server, along with enough system memory to run 13B models somewhat smoothly. So far my favorites have been Mistral 7B (inc. variations) & Wizard-Vicuna 7/13B models. How about you all?
2024-01-04T15:19:48
https://www.reddit.com/r/LocalLLaMA/comments/18yfkdf/tesla_p4_users_what_is_your_favorite_llm_to_run/
ziggo0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18yfkdf
false
null
t3_18yfkdf
/r/LocalLLaMA/comments/18yfkdf/tesla_p4_users_what_is_your_favorite_llm_to_run/
false
false
self
1
null
Langchain streaming is broken with Local Hugging Face models
1
I get responses from my model, but only if I don't use the streaming=True parameter. I can also stream the model directly from my local server when I use curl, but not when I use langchain. Context to the issue here with code example: [https://github.com/langchain-ai/langchain/issues/15516](https://github.com/langchain-ai/langchain/issues/15516)
2024-01-04T15:15:12
https://www.reddit.com/r/LocalLLaMA/comments/18yfgng/langchain_streaming_is_broken_with_local_hugging/
HiddenMushroom11
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18yfgng
false
null
t3_18yfgng
/r/LocalLLaMA/comments/18yfgng/langchain_streaming_is_broken_with_local_hugging/
false
false
self
1
null
New SOTA Coding model: WizardCoder-33B-V1.1 (79.9% pass@1 on HumanEval)
1
from tweet: >🔥 Excited to release WizardCoder-33B-V1.1, the SOTA OSS Code LLM. > >🥇79.9% pass@1 on HumanEval, surpasses GPT3.5-Turbo, DeepSeek-Coder-33B-instruct, and Gemini Pro. > >🥇78.9% pass@1 on MBPP, comparable with GPT3.5-Turbo, surpasses DeepSeek-Coder-33B-instruct and Gemini Pro. > >**Model**: [https://huggingface.co/WizardLM/WizardCoder-33B-V1.1](https://huggingface.co/WizardLM/WizardCoder-33B-V1.1) **Github**: [https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder) Seems to be on par (slightly better than) with other OS model DeepSeek-Coder-33B-Instruct (but deepseek has permissive MIT License). Keep in mind that the new WizardCoder model has [**MSFTResearch**](https://huggingface.co/WizardLM/WizardMath-7B-V1.1/resolve/main/LICENSE) **non-commercial license.** [\(\*the deepseek license is in wrong row here\)](https://preview.redd.it/7guz9tjqrfac1.png?width=1192&format=png&auto=webp&s=8fa11d5e47a0f58521ce3a39f3039b4f3a2808f8)
2024-01-04T14:55:39
https://www.reddit.com/r/LocalLLaMA/comments/18yf14f/new_sota_coding_model_wizardcoder33bv11_799_pass1/
galambalazs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18yf14f
false
{'oembed': {'author_name': 'WizardLM', 'author_url': 'https://twitter.com/WizardLM_AI', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">🔥 Excited to release WizardCoder-33B-V1.1, the SOTA OSS Code LLM.<br><br>🥇79.9% pass@1 on HumanEval, surpasses GPT3.5-Turbo, DeepSeek-Coder-33B-instruct, and Gemini Pro.<br><br>🥇78.9% pass@1 on MBPP, comparable with GPT3.5-Turbo, surpasses DeepSeek-Coder-33B-instruct and Gemini Pro.… <a href="https://t.co/Qs1YkeH5Qc">pic.twitter.com/Qs1YkeH5Qc</a></p>&mdash; WizardLM (@WizardLM_AI) <a href="https://twitter.com/WizardLM_AI/status/1742906065359167730?ref_src=twsrc%5Etfw">January 4, 2024</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/WizardLM_AI/status/1742906065359167730', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_18yf14f
/r/LocalLLaMA/comments/18yf14f/new_sota_coding_model_wizardcoder33bv11_799_pass1/
false
false
https://b.thumbs.redditm…Fw4hLum3Ou3Y.jpg
1
{'enabled': False, 'images': [{'id': 'iw0iJFgHAfydPWm6gsPHHsD_fT1fxvAFrZlkpieTsJY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9ZJPzt9IAEjk1d63xj7-THrRi_qd5MCvt-UmT2qIoLE.jpg?width=108&crop=smart&auto=webp&s=38464439842c837df4b270761d7cdbf1a714bd1f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9ZJPzt9IAEjk1d63xj7-THrRi_qd5MCvt-UmT2qIoLE.jpg?width=216&crop=smart&auto=webp&s=83a9061c32e63d500fe0724abfb8acf8baf4d270', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9ZJPzt9IAEjk1d63xj7-THrRi_qd5MCvt-UmT2qIoLE.jpg?width=320&crop=smart&auto=webp&s=b2ad986018d2d9403cde5c22e98ed85244481dc4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9ZJPzt9IAEjk1d63xj7-THrRi_qd5MCvt-UmT2qIoLE.jpg?width=640&crop=smart&auto=webp&s=86ac9c8ebe9aefb2f8069c591c329bb6e7aba499', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9ZJPzt9IAEjk1d63xj7-THrRi_qd5MCvt-UmT2qIoLE.jpg?width=960&crop=smart&auto=webp&s=ec67ba98ac03c5a4ef4a2047438bf1cfba52372a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9ZJPzt9IAEjk1d63xj7-THrRi_qd5MCvt-UmT2qIoLE.jpg?width=1080&crop=smart&auto=webp&s=7dedd2bc79097d57ed9e6f2e242d47c766d52969', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9ZJPzt9IAEjk1d63xj7-THrRi_qd5MCvt-UmT2qIoLE.jpg?auto=webp&s=306b2925d9bce6e2cded14f10914d700fe40da7c', 'width': 1200}, 'variants': {}}]}
Gibberish responses in LMStudio/Medalpaca
1
Medalpaca sounds like a very promising [model](https://huggingface.co/TheBloke/medalpaca-13B-GGUF). I'm usually able to pull GGUF's and run them with no problem, but this is talking nonsense. It's from TheBloke and I've tried a couple different quant levels. 64GB M1. Any ideas? https://preview.redd.it/8esvnkymsfac1.png?width=1560&format=png&auto=webp&s=96b80e65f335db903d567a37522a0abc75f4d4bd
2024-01-04T14:55:31
https://www.reddit.com/r/LocalLLaMA/comments/18yf116/gibberish_responses_in_lmstudiomedalpaca/
winkler1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18yf116
false
null
t3_18yf116
/r/LocalLLaMA/comments/18yf116/gibberish_responses_in_lmstudiomedalpaca/
false
false
https://b.thumbs.redditm…421GMJi92nxo.jpg
1
{'enabled': False, 'images': [{'id': 'XE9_r5oCxlDYUA_CFHpeWp9x4S8Cgy0sWMGOlm5-w-M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ygMFjAah71tg_IdjGyzf4BKB0T8zXl3hn9J_JalH4JU.jpg?width=108&crop=smart&auto=webp&s=17bbf4e0228f7e522ca6e68252b1b57e50e2b6ed', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ygMFjAah71tg_IdjGyzf4BKB0T8zXl3hn9J_JalH4JU.jpg?width=216&crop=smart&auto=webp&s=cb52c81ac9d2fa2446734a4da92655f70677c86a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ygMFjAah71tg_IdjGyzf4BKB0T8zXl3hn9J_JalH4JU.jpg?width=320&crop=smart&auto=webp&s=d8e084790df7d7b71158bcf23abea6238ae93f21', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ygMFjAah71tg_IdjGyzf4BKB0T8zXl3hn9J_JalH4JU.jpg?width=640&crop=smart&auto=webp&s=4667fe8b63029262e81801e3a38210a9bf490083', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ygMFjAah71tg_IdjGyzf4BKB0T8zXl3hn9J_JalH4JU.jpg?width=960&crop=smart&auto=webp&s=4e60b63256bd0feda1442a8e8860c041a615eefe', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ygMFjAah71tg_IdjGyzf4BKB0T8zXl3hn9J_JalH4JU.jpg?width=1080&crop=smart&auto=webp&s=e472b9f9b0e61f32c0705e34477593fd9f735f7a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ygMFjAah71tg_IdjGyzf4BKB0T8zXl3hn9J_JalH4JU.jpg?auto=webp&s=c6cd5f90e03b3a1f37816b4d32a7a10a9ffa389b', 'width': 1200}, 'variants': {}}]}
Is there a framework that allows for multiple model backends?
1
I've been looking around at a lot of different LLM projects, and I know we are in a space where things are constantly evolving, but it's interesting to me that if there's a project that supports the workflow I want to play with, I have not seen it. What I'm thinking of is a using multiple sources along a pipeline. Like - give an input, and send it to a local 7b instance to try to do some token/context optimization 'summarize this text, and check against recent history for relevant messages. Then output that to gpt4' etc. Agent based approaches are interesting, but every one i've seen seems to be designed with a single LLM provider, local or cloud. If they can utilize multiple backends, the documentation i've read isn't making it super clear, so curious if anyone has seen this approach working on anything, or if there's a project i've missed out there.
2024-01-04T14:41:10
https://www.reddit.com/r/LocalLLaMA/comments/18yeps1/is_there_a_framework_that_allows_for_multiple/
moarmagic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18yeps1
false
null
t3_18yeps1
/r/LocalLLaMA/comments/18yeps1/is_there_a_framework_that_allows_for_multiple/
false
false
self
1
null
Llama7b fine-tuning for factual data (very bad results)
1
I have a dataset of 72k factual questions and answers which I need to use to fine tune llama7b so that it can learn from them and answer user queries based on those questions. Example q/a: Where is the office headquarters located: Office headquarters are located at ... I had initially fine tuned the first version of the model with lesser data but the performance was horrible. I had an idea to generate multiple questions for the same q/a pair so that it can learn better. I did that for around 5k question for the v2 of the fine tuned model and the performance is still very bad (it's able to get ~10% of questions right) I have used Lora to do the fine tuning with both alpha and rank parameters as 128, dropout as 0.05 What am I doing wrong and please give me suggestions on how I can improve the fine tuned model so it can get atleast 50% of questions right. I am also open to using another llm/paras etc The final plan is to use rag+fine tunined model but we want to get to the point where the fine tuned model is not as horrible as it currently is I am using azure ML studio to do the fine tuning
2024-01-04T14:13:22
https://www.reddit.com/r/LocalLLaMA/comments/18ye4jl/llama7b_finetuning_for_factual_data_very_bad/
Godfather17131
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ye4jl
false
null
t3_18ye4jl
/r/LocalLLaMA/comments/18ye4jl/llama7b_finetuning_for_factual_data_very_bad/
false
false
self
1
null
Yet another LLM
1
I present Tenebra. An uncensored, "self aware" AI, with personality, skepticism and a sometimes hilarious self doubt. It's nothing special, but it can sometimes give you an eerie "*Sydney*" vibes. Use with caution. SicariusSicariiStuff/Tenebra\_30B\_Alpha01\_4BIT &#x200B; &#x200B; https://preview.redd.it/apjco0tkgfac1.png?width=877&format=png&auto=webp&s=2d4455491d9260a5018a3bcf664c458f7c1e753e
2024-01-04T13:45:17
https://www.reddit.com/r/LocalLLaMA/comments/18ydixs/yet_another_llm/
Sicarius_The_First
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ydixs
false
null
t3_18ydixs
/r/LocalLLaMA/comments/18ydixs/yet_another_llm/
false
false
https://b.thumbs.redditm…VOR5Is0ECJyE.jpg
1
null
New model : Noromaid v0.1 x Mixtral 8x7b
1
The Noromaid we know but better with all the qualities of Mixtral 8x7b, if you like storywriting or roleplay this is a really good model to start with your creations. And even more if you would like the outputs to be uncensored and that the answer is what you are really looking for I found it on [Infermatic.ai](https://Infermatic.ai) (free)
2024-01-04T13:11:27
https://www.reddit.com/r/LocalLLaMA/comments/18ycuhi/new_model_noromaid_v01_x_mixtral_8x7b/
Horror_Echo6243
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ycuhi
false
null
t3_18ycuhi
/r/LocalLLaMA/comments/18ycuhi/new_model_noromaid_v01_x_mixtral_8x7b/
false
false
self
1
{'enabled': False, 'images': [{'id': 'W-pCh47ZuoTaTpwtRXWs755SzkkFQzHW6A1NS1MJI_A', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=108&crop=smart&auto=webp&s=aa7b8a73d9f4825dcec8d2a7d8805a9c50369d0b', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=216&crop=smart&auto=webp&s=96a387e3b3e91f000fc25d53f1c6557cfd455bcb', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=320&crop=smart&auto=webp&s=c5ce1a81747b40b0e8234eb8a3c80296d7d99fb3', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=640&crop=smart&auto=webp&s=1bd8d44f5f6385004a3c4eee1032f01d3df456f9', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=960&crop=smart&auto=webp&s=7f1a5b2670f910b6635784a1f867157aaf4f9f70', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?width=1080&crop=smart&auto=webp&s=59bf35f672c4b4e9f72a5f2da763f919b49c40fe', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/IoUbWrubYyEjhlVchwRPNergZFyoEfUZRDayK06XMTM.jpg?auto=webp&s=6edb04ad94226e898c90b505438281c5c6ba7cf7', 'width': 1080}, 'variants': {}}]}
Would you be disappointed if GPT 5 is not substantially better than GPT 4?
1
Reasons why it should be substantially better: - more than a year of experience since ChatGPT launched - better promoting methods discovered - so much quality open source research they can build on - Better RAG - will be truly multimodal - newer chips from Nvidia - Synthetic datasets combined or made from real data work better than real data - some really nice additions to their research team - they would acquired better data sets - model should be bigger - Q Star hype (if its real) - longer context - possible long term memory - agents - Sam Altman and other cryptic accounts have been hyping everything so much I would be so disappointed if GPT 5 isn't substantially better in reasoning, maths and image generation than GPT 4. Atleast as much better as GPT 4 is to 3.5 If GPT 5 is indeed better, Open source model developers would be further incentivised to make their models better.
2024-01-04T13:03:13
https://www.reddit.com/r/LocalLLaMA/comments/18ycoqw/would_you_be_disappointed_if_gpt_5_is_not/
TysonUsykFury
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ycoqw
false
null
t3_18ycoqw
/r/LocalLLaMA/comments/18ycoqw/would_you_be_disappointed_if_gpt_5_is_not/
false
false
self
1
null
Text Generation Models for HTML Parsing and JSON Mapping?
1
The problem I'm trying to solve involves extracting form fields from HTML code, such as name, email, college name, and so on. Additionally, I need to map a JSON file containing both IDs and values to another JSON file with only IDs. Which models, such as GPT-3.5 Turbo, GPT-4, GPT-4 Turbo, and Llama 2, can I use to solve this task?
2024-01-04T12:59:56
https://www.reddit.com/r/LocalLLaMA/comments/18ycm1w/text_generation_models_for_html_parsing_and_json/
guna1o0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ycm1w
false
null
t3_18ycm1w
/r/LocalLLaMA/comments/18ycm1w/text_generation_models_for_html_parsing_and_json/
false
false
self
1
null
Connect your LLM to Slack within 5 minutes!
1
2024-01-04T12:58:43
https://v.redd.it/b0j42s2j6fac1
isac_yoo
v.redd.it
1970-01-01T00:00:00
0
{}
18ycl8d
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/b0j42s2j6fac1/DASHPlaylist.mpd?a=1706965137%2CNDJmZTM0ZDllMjExYzQ0MDEzNzc5YzE4MWViNjE0ZDY0OWZlNTJiMGI4YmE0NDExMjc5MTk2OTFlYWU4YWM0NQ%3D%3D&v=1&f=sd', 'duration': 40, 'fallback_url': 'https://v.redd.it/b0j42s2j6fac1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/b0j42s2j6fac1/HLSPlaylist.m3u8?a=1706965137%2COTFkMTQxMGVkMGQ3ZTJlOWY2YzNhNjcwYjc1NWUxNjkxMWZkZTZlZGIxOTk2MjZjZDBlNTA3MTBhNWQ3OGM1Mg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/b0j42s2j6fac1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_18ycl8d
/r/LocalLLaMA/comments/18ycl8d/connect_your_llm_to_slack_within_5_minutes/
false
false
https://external-preview…732e45de235544bb
1
{'enabled': False, 'images': [{'id': 'bnM3czJ5aTk3ZmFjMazivRA5OTZ0DgvInsVNoh4f25vNsizBALJc0Q3OInkv', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bnM3czJ5aTk3ZmFjMazivRA5OTZ0DgvInsVNoh4f25vNsizBALJc0Q3OInkv.png?width=108&crop=smart&format=pjpg&auto=webp&s=793f03670ff84969ef881182e4f03b4933b4c82d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bnM3czJ5aTk3ZmFjMazivRA5OTZ0DgvInsVNoh4f25vNsizBALJc0Q3OInkv.png?width=216&crop=smart&format=pjpg&auto=webp&s=f7e6dbbee4de85d57427799faab53422441c5675', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/bnM3czJ5aTk3ZmFjMazivRA5OTZ0DgvInsVNoh4f25vNsizBALJc0Q3OInkv.png?width=320&crop=smart&format=pjpg&auto=webp&s=4d6042351aef11027f358b3d4f49f90a6e690a8e', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/bnM3czJ5aTk3ZmFjMazivRA5OTZ0DgvInsVNoh4f25vNsizBALJc0Q3OInkv.png?width=640&crop=smart&format=pjpg&auto=webp&s=7a39417ef019ef0bf96bf00273bac1ca2f271723', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/bnM3czJ5aTk3ZmFjMazivRA5OTZ0DgvInsVNoh4f25vNsizBALJc0Q3OInkv.png?width=960&crop=smart&format=pjpg&auto=webp&s=17990b8db26052de7a59cd70fce0d7eda46f9a3a', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bnM3czJ5aTk3ZmFjMazivRA5OTZ0DgvInsVNoh4f25vNsizBALJc0Q3OInkv.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1451619ce389c9556dde68ae8f8a52dc43ad0fe1', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/bnM3czJ5aTk3ZmFjMazivRA5OTZ0DgvInsVNoh4f25vNsizBALJc0Q3OInkv.png?format=pjpg&auto=webp&s=cc8eee5ae5c1d95572efea42f8d19f295f5edd32', 'width': 1920}, 'variants': {}}]}
Jan.AI and choosing where models are stored?
1
Is there an easier way to run local LLaMA? If not, how do I set the directory for [Jan.AI](https://Jan.AI)? I tried regedit to change my default install location, I tried git clone the repo (couldn't find an exe), ... Halp?
2024-01-04T12:52:16
https://www.reddit.com/r/LocalLLaMA/comments/18ycgtc/janai_and_choosing_where_models_are_stored/
LucidFir
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ycgtc
false
null
t3_18ycgtc
/r/LocalLLaMA/comments/18ycgtc/janai_and_choosing_where_models_are_stored/
false
false
self
1
{'enabled': False, 'images': [{'id': '_clR5lo0uUzBmmOefgsOcrqCYpHgjkxrStKzWLzjtqg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/71HRL7bDHzSg5fARXgUVzQrb-ddlAEFhDMiNjCqLPcE.jpg?width=108&crop=smart&auto=webp&s=a34a9c017a9872303c87fdbe0bca0b95846bd110', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/71HRL7bDHzSg5fARXgUVzQrb-ddlAEFhDMiNjCqLPcE.jpg?width=216&crop=smart&auto=webp&s=67047021fa80720833499c426641e059a6e86bbc', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/71HRL7bDHzSg5fARXgUVzQrb-ddlAEFhDMiNjCqLPcE.jpg?width=320&crop=smart&auto=webp&s=dcdb4bb0148442648e65b68e6865207cac0b2fc2', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/71HRL7bDHzSg5fARXgUVzQrb-ddlAEFhDMiNjCqLPcE.jpg?width=640&crop=smart&auto=webp&s=3d3bda8cadf2426facfc15184a6c812a572eb706', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/71HRL7bDHzSg5fARXgUVzQrb-ddlAEFhDMiNjCqLPcE.jpg?width=960&crop=smart&auto=webp&s=f614f9911aa52ab8c5e6e44dfaca8927167be907', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/71HRL7bDHzSg5fARXgUVzQrb-ddlAEFhDMiNjCqLPcE.jpg?width=1080&crop=smart&auto=webp&s=2a44840e4665263ae5e36cf6702fffebc2b295df', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/71HRL7bDHzSg5fARXgUVzQrb-ddlAEFhDMiNjCqLPcE.jpg?auto=webp&s=8e5e6d929427207a2483692d89844ab933417a5f', 'width': 1200}, 'variants': {}}]}
MicroModels: End to End Training of Speech Synthesis with 12 million parameter Mamba
1
[https://open.substack.com/pub/2084/p/2084-marcrandbot-speech-synthesis?r=brh1e&utm\_campaign=post&utm\_medium=web&showWelcome=true](https://open.substack.com/pub/2084/p/2084-marcrandbot-speech-synthesis?r=brh1e&utm_campaign=post&utm_medium=web&showWelcome=true) I was curious as to how well Mamba would perform for speech synthesis, so I wrote a post about how you can train a mamba based model for speech synthesis. The colab in the post contains the full code for training a Mamba model, you just need to change out the playlist\_url at the start. I'm honestly really pleased at how well micro models work for tasks - turns out you don't need that many parameters for a lot of tasks. If there's interest, I might do a music generation bot as a followup.
2024-01-04T12:27:44
https://www.reddit.com/r/LocalLLaMA/comments/18yc07b/micromodels_end_to_end_training_of_speech/
ExaminationNo8522
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18yc07b
false
null
t3_18yc07b
/r/LocalLLaMA/comments/18yc07b/micromodels_end_to_end_training_of_speech/
false
false
self
1
{'enabled': False, 'images': [{'id': '4iDXfZi6XOc7-MV4usuDf0jvX9kEblrhQgJO4s_wL_E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/TkgxTNDh7DxverobqDJQELlZpjix8Uy3b72eStqY3hU.jpg?width=108&crop=smart&auto=webp&s=92ba1fd677cb9791ce836d0c2fa888ca553044e8', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/TkgxTNDh7DxverobqDJQELlZpjix8Uy3b72eStqY3hU.jpg?width=216&crop=smart&auto=webp&s=220dca20bb04c2d439b729aee51242b66353234c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/TkgxTNDh7DxverobqDJQELlZpjix8Uy3b72eStqY3hU.jpg?width=320&crop=smart&auto=webp&s=86aa0424622cc5785373d459817a4d7c4e5f77cb', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/TkgxTNDh7DxverobqDJQELlZpjix8Uy3b72eStqY3hU.jpg?width=640&crop=smart&auto=webp&s=80bffd54b11bb4bac61d92d085937ecd95e5f736', 'width': 640}], 'source': {'height': 410, 'url': 'https://external-preview.redd.it/TkgxTNDh7DxverobqDJQELlZpjix8Uy3b72eStqY3hU.jpg?auto=webp&s=1329b9a4e9edfe13bdb38289f20c4e2a1c1ed839', 'width': 728}, 'variants': {}}]}
CPU-only benchmark produces text at slow reading pace (~5 t/s)
1
./main -m pygmalion-2-13b.Q5_K_M.gguf --prompt "Once upon a time" -p 0 -n 128 llama_print_timings: load time = 515,12 ms llama_print_timings: sample time = 16,64 ms / 128 runs ( 0,13 ms per token, 7694,16 tokens per second) llama_print_timings: prompt eval time = 276,84 ms / 3 tokens ( 92,28 ms per token, 10,84 tokens per second) llama_print_timings: eval time = 25538,33 ms / 127 runs ( 201,09 ms per token, 4,97 tokens per second) llama_print_timings: total time = 25879,82 ms With my new work computer (no Graphics card) CPU=Intel© Core™ i9-14900K × 24 RAM=128 GB &#x200B;
2024-01-04T12:17:53
https://www.reddit.com/r/LocalLLaMA/comments/18ybtnn/cpuonly_benchmark_produces_text_at_slow_reading/
Full_Operation_9865
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ybtnn
false
null
t3_18ybtnn
/r/LocalLLaMA/comments/18ybtnn/cpuonly_benchmark_produces_text_at_slow_reading/
false
false
self
1
null
Which vector DB for persisting data
1
Which vector DB do people use for semantic search? Qdrant, Pinecone, Milvus, Marqo, Postgres plugins, ... I'm looking specifically for databases that have robust persistence support, e.g. the popular Chroma DB only has alpha support for persistence. What are pro/cons and things to look for in vector DBs? What are your experiences? &#x200B;
2024-01-04T12:03:04
https://www.reddit.com/r/LocalLLaMA/comments/18ybk8f/which_vector_db_for_persisting_data/
chrome___
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18ybk8f
false
null
t3_18ybk8f
/r/LocalLLaMA/comments/18ybk8f/which_vector_db_for_persisting_data/
false
false
self
1
null
New Model: qwen-1.8B-guanaco
1
[removed]
2024-01-04T12:02:07
https://huggingface.co/TinyPixel/qwen-1.8B-guanaco
Sufficient_Run1518
huggingface.co
1970-01-01T00:00:00
0
{}
18ybjl2
false
null
t3_18ybjl2
/r/LocalLLaMA/comments/18ybjl2/new_model_qwen18bguanaco/
false
false
https://b.thumbs.redditm…nRtSxA3VYFUo.jpg
1
{'enabled': False, 'images': [{'id': 'CtFkkdtPZlycCWllVh1OJ7XjcdkGKPybFboxzJX264M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/zY8o3Jr-ihG1GewSdlxyFslStE4Hva60lDTG-iS99qM.jpg?width=108&crop=smart&auto=webp&s=433547fb29ae3253e2acfccda766cb2dc07c8208', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/zY8o3Jr-ihG1GewSdlxyFslStE4Hva60lDTG-iS99qM.jpg?width=216&crop=smart&auto=webp&s=8597d15b043382b3337ba48f0d810074bc70457c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/zY8o3Jr-ihG1GewSdlxyFslStE4Hva60lDTG-iS99qM.jpg?width=320&crop=smart&auto=webp&s=97f49db572d63c3d6bd38cdb681f6e02588a2ed8', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/zY8o3Jr-ihG1GewSdlxyFslStE4Hva60lDTG-iS99qM.jpg?width=640&crop=smart&auto=webp&s=d92e152380c3595b33683edb1ce21ec75b6d576c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/zY8o3Jr-ihG1GewSdlxyFslStE4Hva60lDTG-iS99qM.jpg?width=960&crop=smart&auto=webp&s=965cac9d36fbd548418f628fccfda41146f3c6f3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/zY8o3Jr-ihG1GewSdlxyFslStE4Hva60lDTG-iS99qM.jpg?width=1080&crop=smart&auto=webp&s=0c2521241a800c1b61bda7508f4b427e11f630e0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/zY8o3Jr-ihG1GewSdlxyFslStE4Hva60lDTG-iS99qM.jpg?auto=webp&s=cf91f0b20b377e47b322d042d109a516fbca5cf3', 'width': 1200}, 'variants': {}}]}
Iteratively synchronize git changes with faiss to use LLMs for chat and semantic search locally
1
2024-01-04T11:45:51
https://v.redd.it/crj5sbgkueac1
Fleischkluetensuppe
v.redd.it
1970-01-01T00:00:00
0
{}
18yb9dv
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/crj5sbgkueac1/DASHPlaylist.mpd?a=1706960767%2CMzhmMjQzNDZlYmU5MThhNjUxYTAzN2NlOTI4ZjI2ODllOTk3N2E4ODE5OWYxMjdjMjY0NmQyZjQ5MTc3NDY3Yw%3D%3D&v=1&f=sd', 'duration': 37, 'fallback_url': 'https://v.redd.it/crj5sbgkueac1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/crj5sbgkueac1/HLSPlaylist.m3u8?a=1706960767%2CMTM5M2NhM2Q5YmIwYmU3NDdjNTgzZTM2YTYyNDA3MGYzYTY3NzZjM2Y5OTFkMjdiMTg2ODdlYmNjNzEwMzk0Ng%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/crj5sbgkueac1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 876}}
t3_18yb9dv
/r/LocalLLaMA/comments/18yb9dv/iteratively_synchronize_git_changes_with_faiss_to/
false
false
https://external-preview…a0ec01b9d428ab04
1
{'enabled': False, 'images': [{'id': 'djZlNmsxa2N2ZWFjMcdr_-oj80y4bsuac6-ehVUcxyrYyzYeAiBSfRNoFk_T', 'resolutions': [{'height': 88, 'url': 'https://external-preview.redd.it/djZlNmsxa2N2ZWFjMcdr_-oj80y4bsuac6-ehVUcxyrYyzYeAiBSfRNoFk_T.png?width=108&crop=smart&format=pjpg&auto=webp&s=2db1c47244a73fed74b51b5353577f3ebbd955eb', 'width': 108}, {'height': 177, 'url': 'https://external-preview.redd.it/djZlNmsxa2N2ZWFjMcdr_-oj80y4bsuac6-ehVUcxyrYyzYeAiBSfRNoFk_T.png?width=216&crop=smart&format=pjpg&auto=webp&s=84dc5d07e04670d2107d92b77618026d062949f7', 'width': 216}, {'height': 263, 'url': 'https://external-preview.redd.it/djZlNmsxa2N2ZWFjMcdr_-oj80y4bsuac6-ehVUcxyrYyzYeAiBSfRNoFk_T.png?width=320&crop=smart&format=pjpg&auto=webp&s=5c8f13ac8fccd366640beb5f3bb75c998956f9f0', 'width': 320}, {'height': 526, 'url': 'https://external-preview.redd.it/djZlNmsxa2N2ZWFjMcdr_-oj80y4bsuac6-ehVUcxyrYyzYeAiBSfRNoFk_T.png?width=640&crop=smart&format=pjpg&auto=webp&s=2fc3dc731624229ee24e52736c98bce2e6766249', 'width': 640}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/djZlNmsxa2N2ZWFjMcdr_-oj80y4bsuac6-ehVUcxyrYyzYeAiBSfRNoFk_T.png?format=pjpg&auto=webp&s=6695a64a5ce7253aeaccd0223a5402f73496bc72', 'width': 876}, 'variants': {}}]}
Training on Work Emails?
1
Has anyone tried doing it? I suspect it is on GMail & Outlook's roadmaps, an LLM which I could ask about old emails and get references would be v valuable..
2024-01-04T10:46:34
https://www.reddit.com/r/LocalLLaMA/comments/18yaapw/training_on_work_emails/
johnorford
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18yaapw
false
null
t3_18yaapw
/r/LocalLLaMA/comments/18yaapw/training_on_work_emails/
false
false
self
1
null
vLLM on Windows PC
1
[removed]
2024-01-04T10:36:44
https://github.com/aneeshjoy/vllm-windows
AstrionX
github.com
1970-01-01T00:00:00
0
{}
18ya5c7
false
null
t3_18ya5c7
/r/LocalLLaMA/comments/18ya5c7/vllm_on_windows_pc/
false
false
https://b.thumbs.redditm…2eZCgzNRZA7k.jpg
1
{'enabled': False, 'images': [{'id': 'FsJYyfl4eD44aVKUW5di9PuVFcQCMcMe_XoXVmXhPNo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_m-S0oBFsjdKcKPgwD4xAS6xzkygCqWuwm9KB4v6GG8.jpg?width=108&crop=smart&auto=webp&s=794bbcca4f83011545bd89fa399f9a10be38463a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_m-S0oBFsjdKcKPgwD4xAS6xzkygCqWuwm9KB4v6GG8.jpg?width=216&crop=smart&auto=webp&s=6bc96177fabd4b1969689b9de3cf34bffbbaaec2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_m-S0oBFsjdKcKPgwD4xAS6xzkygCqWuwm9KB4v6GG8.jpg?width=320&crop=smart&auto=webp&s=bf8d72182157ad6d6071c5861dd08fca4532867c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_m-S0oBFsjdKcKPgwD4xAS6xzkygCqWuwm9KB4v6GG8.jpg?width=640&crop=smart&auto=webp&s=074d66f0c4beac28de49e61141e06297a8ea6be6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_m-S0oBFsjdKcKPgwD4xAS6xzkygCqWuwm9KB4v6GG8.jpg?width=960&crop=smart&auto=webp&s=b6f526e5236655e22d072b94d48827a25045b8ca', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_m-S0oBFsjdKcKPgwD4xAS6xzkygCqWuwm9KB4v6GG8.jpg?width=1080&crop=smart&auto=webp&s=2dc26a1f446a43ff193a9eaf277f1958c01904d5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_m-S0oBFsjdKcKPgwD4xAS6xzkygCqWuwm9KB4v6GG8.jpg?auto=webp&s=f98999ac99eea3fcb5bdee61a9360af44c9baba2', 'width': 1200}, 'variants': {}}]}
vLLM on Windows PC
1
[removed]
2024-01-04T10:34:42
https://github.com/aneeshjoy/vllm-windows
a4ai
github.com
1970-01-01T00:00:00
0
{}
18ya47x
false
null
t3_18ya47x
/r/LocalLLaMA/comments/18ya47x/vllm_on_windows_pc/
false
false
https://b.thumbs.redditm…2eZCgzNRZA7k.jpg
1
{'enabled': False, 'images': [{'id': 'FsJYyfl4eD44aVKUW5di9PuVFcQCMcMe_XoXVmXhPNo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_m-S0oBFsjdKcKPgwD4xAS6xzkygCqWuwm9KB4v6GG8.jpg?width=108&crop=smart&auto=webp&s=794bbcca4f83011545bd89fa399f9a10be38463a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_m-S0oBFsjdKcKPgwD4xAS6xzkygCqWuwm9KB4v6GG8.jpg?width=216&crop=smart&auto=webp&s=6bc96177fabd4b1969689b9de3cf34bffbbaaec2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_m-S0oBFsjdKcKPgwD4xAS6xzkygCqWuwm9KB4v6GG8.jpg?width=320&crop=smart&auto=webp&s=bf8d72182157ad6d6071c5861dd08fca4532867c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_m-S0oBFsjdKcKPgwD4xAS6xzkygCqWuwm9KB4v6GG8.jpg?width=640&crop=smart&auto=webp&s=074d66f0c4beac28de49e61141e06297a8ea6be6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_m-S0oBFsjdKcKPgwD4xAS6xzkygCqWuwm9KB4v6GG8.jpg?width=960&crop=smart&auto=webp&s=b6f526e5236655e22d072b94d48827a25045b8ca', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_m-S0oBFsjdKcKPgwD4xAS6xzkygCqWuwm9KB4v6GG8.jpg?width=1080&crop=smart&auto=webp&s=2dc26a1f446a43ff193a9eaf277f1958c01904d5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_m-S0oBFsjdKcKPgwD4xAS6xzkygCqWuwm9KB4v6GG8.jpg?auto=webp&s=f98999ac99eea3fcb5bdee61a9360af44c9baba2', 'width': 1200}, 'variants': {}}]}
Visualising LLM training compute & comparing to MMLU benchmark
1
Compute data from [https://epochai.org/blog/who-is-leading-in-ai-an-analysis-of-industry-ai-research](https://epochai.org/blog/who-is-leading-in-ai-an-analysis-of-industry-ai-research) Charts: 1. Re-basing compute as a percentage of GPT-4 training estimates - note the log scale. Data only available for a few models 2. Comparison of training vs MMLU benchmarks (collated from various reports) - correlation is 0.43 3. Comparison of performance vs basic trendline [Training compute as a percentage of GPT-4](https://preview.redd.it/pmf7gxuqbeac1.png?width=2096&format=png&auto=webp&s=09265318707b1e42c0acd1bc2accd7967cea6fba) [Comparison of compute to MMLU benchmark](https://preview.redd.it/zb7m0z8tbeac1.png?width=2096&format=png&auto=webp&s=622dae7650d34a2d1ec99f77da9cd9af90a9b604) [Comparison of model to expectation of benchmark performance vs basic trendline](https://preview.redd.it/eem3iwhubeac1.png?width=2130&format=png&auto=webp&s=457a6c7656dba14f6491846faf87e2e48fc75735)
2024-01-04T09:59:25
https://www.reddit.com/r/LocalLLaMA/comments/18y9ks2/visualising_llm_training_compute_comparing_to/
Time-Winter-4319
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18y9ks2
false
null
t3_18y9ks2
/r/LocalLLaMA/comments/18y9ks2/visualising_llm_training_compute_comparing_to/
false
false
https://b.thumbs.redditm…m1R7c9SmD2Jk.jpg
1
null
Visualising LLM training compute & comparison to benchmarks
1
Compute data from [https://epochai.org/blog/who-is-leading-in-ai-an-analysis-of-industry-ai-research](https://epochai.org/blog/who-is-leading-in-ai-an-analysis-of-industry-ai-research) Charts: 1) Re-basing compute as a percentage of GPT-4 training estimates - note the log scale. Data only available for a few models 2) Comparison of training vs MMLU benchmarks (collated from various reports) 3) Comparison of performance vs basic trendline
2024-01-04T09:55:15
https://www.reddit.com/r/LocalLLaMA/comments/18y9in4/visualising_llm_training_compute_comparison_to/
Time-Winter-4319
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18y9in4
false
null
t3_18y9in4
/r/LocalLLaMA/comments/18y9in4/visualising_llm_training_compute_comparison_to/
false
false
self
1
{'enabled': False, 'images': [{'id': 'JPDJcOh0PvKtFUIkFv5c77FDmejTrDu6N6ACn_no_Vo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MkWSBDUWsIGcL-sXSMnS5y-bTd8DfUU2mdN41s-DDaA.jpg?width=108&crop=smart&auto=webp&s=0b06f7b9125a746943dfe14a6e9b52f58131af37', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MkWSBDUWsIGcL-sXSMnS5y-bTd8DfUU2mdN41s-DDaA.jpg?width=216&crop=smart&auto=webp&s=bf0849430580b637ca47313554f8e46ac16fe4d6', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MkWSBDUWsIGcL-sXSMnS5y-bTd8DfUU2mdN41s-DDaA.jpg?width=320&crop=smart&auto=webp&s=789fb6bacc83ab0346f4f399612f23d250d1f550', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MkWSBDUWsIGcL-sXSMnS5y-bTd8DfUU2mdN41s-DDaA.jpg?width=640&crop=smart&auto=webp&s=0e4c921a29b746d29dbdaddb4a0f1f85d0c14738', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MkWSBDUWsIGcL-sXSMnS5y-bTd8DfUU2mdN41s-DDaA.jpg?width=960&crop=smart&auto=webp&s=5fa0cee3b8f969fa3f699ae7dc73b1c005a2c941', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MkWSBDUWsIGcL-sXSMnS5y-bTd8DfUU2mdN41s-DDaA.jpg?width=1080&crop=smart&auto=webp&s=28fd3d8f2ba58e5d17a52c9db2d80bb159e92291', 'width': 1080}], 'source': {'height': 1359, 'url': 'https://external-preview.redd.it/MkWSBDUWsIGcL-sXSMnS5y-bTd8DfUU2mdN41s-DDaA.jpg?auto=webp&s=0109f09790d82e493129c67235689df5bba55cb6', 'width': 2415}, 'variants': {}}]}
Collaborative platform to train your models?
1
[removed]
2024-01-04T09:37:48
https://www.reddit.com/r/LocalLLaMA/comments/18y99am/collaborative_platform_to_train_your_models/
New_Detective_1363
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18y99am
false
null
t3_18y99am
/r/LocalLLaMA/comments/18y99am/collaborative_platform_to_train_your_models/
false
false
self
1
null
Where's the comprehensive price table for LLMs / Cloud Providers comparison?
1
Hey everyone I'm a FS AI dev, and I was looking out for jupyter notebooks comparing different LLM input/output tokens combo for different models & cloud providers, both closed and open source. I was expecting this was a pretty standard practice for the gaziliion AI startups out there, so I was quite suprised to not find anything that easily. Is anyone aware of an excel sheet or jupyter notebook that compares different models and their respective end prices, depending on model? Im particularly looking for GPT4 Turbo, Claude, Llama65B, llama7B, mixtral, etc... And would like to have an easy plug and play for HF+AWS, HF+Inferless, HF+Azure, OpenAI, Athropic, etc... Thanks in advance!
2024-01-04T09:25:14
https://www.reddit.com/r/LocalLLaMA/comments/18y92mf/wheres_the_comprehensive_price_table_for_llms/
AlexandreFSR
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18y92mf
false
null
t3_18y92mf
/r/LocalLLaMA/comments/18y92mf/wheres_the_comprehensive_price_table_for_llms/
false
false
self
1
null
LLM RAG for ingesting a programming framework reference manual?
1
[removed]
2024-01-04T07:26:48
https://www.reddit.com/r/LocalLLaMA/comments/18y7bjx/llm_rag_for_ingesting_a_programming_framework/
Ben_Levitt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18y7bjx
false
null
t3_18y7bjx
/r/LocalLLaMA/comments/18y7bjx/llm_rag_for_ingesting_a_programming_framework/
false
false
self
1
null
CodeBooga is currently the #1 model for Python and the #3 model for JS in the CanAiCode Leaderboard (vs 141 other models)
1
2024-01-04T06:22:02
https://i.redd.it/cn3x5bm89dac1.png
oobabooga4
i.redd.it
1970-01-01T00:00:00
0
{}
18y6aum
false
null
t3_18y6aum
/r/LocalLLaMA/comments/18y6aum/codebooga_is_currently_the_1_model_for_python_and/
false
false
https://a.thumbs.redditm…iUq0taPZcxd0.jpg
1
{'enabled': True, 'images': [{'id': '4HxUiTWPCW6NYxxuizc1PDxotPcQIheaPa80pG18nRE', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/cn3x5bm89dac1.png?width=108&crop=smart&auto=webp&s=c1fb0b7a2376d191372ad4a856bebd9499cff3fd', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/cn3x5bm89dac1.png?width=216&crop=smart&auto=webp&s=8ab864e3a10e6b3f896e22873aa1eda2d39f9f07', 'width': 216}, {'height': 174, 'url': 'https://preview.redd.it/cn3x5bm89dac1.png?width=320&crop=smart&auto=webp&s=386c1be92364e3b92b71a8cd96b8e7b208f1adb8', 'width': 320}, {'height': 349, 'url': 'https://preview.redd.it/cn3x5bm89dac1.png?width=640&crop=smart&auto=webp&s=b9282589aea93916f27105639ca241fa6bd0bf98', 'width': 640}, {'height': 524, 'url': 'https://preview.redd.it/cn3x5bm89dac1.png?width=960&crop=smart&auto=webp&s=d43bc12ce2b708b70fb1b728b11e9779639c8ec8', 'width': 960}], 'source': {'height': 537, 'url': 'https://preview.redd.it/cn3x5bm89dac1.png?auto=webp&s=5545155a19935304343ce17ecd43a8295b9c6fd5', 'width': 983}, 'variants': {}}]}
Fine-Tuning 12 TinyLlamas and Shoving Them in a Clown Car
1
Has anyone thought about this? With mergekit it should be possible. Considering the very early examples, even moe merging a model with itself supposedly increases the performance. One tuned for math, one for writing, another for RAG... Then moe merged all together with positive-negative prompts.
2024-01-04T06:01:50
https://www.reddit.com/r/LocalLLaMA/comments/18y5ydj/finetuning_12_tinyllamas_and_shoving_them_in_a/
xadiant
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18y5ydj
false
null
t3_18y5ydj
/r/LocalLLaMA/comments/18y5ydj/finetuning_12_tinyllamas_and_shoving_them_in_a/
false
false
self
1
null
Regression on last hidden state?
1
I am trying to use LLama for a regression problem. Rather than trying to predict the number in text, i trained an additional linear layer on top of the decoder (along side the output embedding layer). The input to the linear layer is basically the last position of the decoded hidden state, however i am noticing a huge difference between training and inference MSE, where the mse is much higher during inference. Wonder if anyone have experience in this and able to give some advice. My suspicion is that since teacher forcing is used during training, there could be quite a large discrepancy between the inference and training (since inference is autoregressive, the hidden state progression towards the last pos changes every prior token), in this case taking the mean rather than last might be better?
2024-01-04T04:31:33
https://www.reddit.com/r/LocalLLaMA/comments/18y4a3j/regression_on_last_hidden_state/
nohodlnodough
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18y4a3j
false
null
t3_18y4a3j
/r/LocalLLaMA/comments/18y4a3j/regression_on_last_hidden_state/
false
false
self
1
null
Question about training a model
1
I would like to train a model to be an expert at writing stable diffusion prompts. What would be the best LLM to train and what would be the best trading method? I was thinking a Lora but I’m not sure.
2024-01-04T04:28:48
https://www.reddit.com/r/LocalLLaMA/comments/18y4861/question_about_training_a_model/
TheHobbyistHacker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18y4861
false
null
t3_18y4861
/r/LocalLLaMA/comments/18y4861/question_about_training_a_model/
false
false
self
1
null
UPWORK =/= Local LLMs, RAG, LLaMaIndex
1
I want to do something on a small scale as a POC. * RAG implementation via LLaMaIndex * I have tabular databases (csv) and also a handful of PDF docs that are somewhat complex (mathematic characters that matter mixed in with simple text and tons of footnotes and embedded hyperlinks). Many of the PDFs contain appendices that have instructions for building programs/functions which I intend to explore/build with an LLM's assistance (the pdfs and tabular data have contextual relevance, of course). The corpus of data is very clean, VERY thorough, and I have tonnes of validation data for everything I want to build. What I DON'T want to do is train GPT to do it. I want to keep it all proprietary. I assume I will have a lot of work to do to get all those PDF docs in markdown format and the mathematic characters are critical, so maybe pdf2htmlEX is going to be my solution there (I've done quite a bit of that already). * train / fine tune an inference model so that I prompt the LLM as well as possible. * Phase I would use GPT-4 as the LLM via an API call and Phase I.a. would be an open source model and Phase II would be to train/fine-tune an open source LLM, if necessary. My issue is that when I go on Upwork, I find precious few people who have done anything like this, at scale or as a POC. There are a couple of folks in the $200-300/hr range, and maybe that's where I need to land. I want to invest in the POC to run on a M3 Macbook, but it could be cloud based if that is a deal-breaker. If the experiment proves out, scaling this up should be relatively easy, as much of the architecture of the POC should scale. Please tell me how Dunning Kruger is owning me here.
2024-01-04T04:19:40
https://www.reddit.com/r/LocalLLaMA/comments/18y41rg/upwork_local_llms_rag_llamaindex/
knob-0u812
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18y41rg
false
null
t3_18y41rg
/r/LocalLLaMA/comments/18y41rg/upwork_local_llms_rag_llamaindex/
false
false
self
1
null
Epistemology: A simple and clear way of hosting llama.cpp as a private HTTP API using Rust
1
2024-01-04T04:09:01
https://github.com/richardanaya/epistemology/
richardanaya
github.com
1970-01-01T00:00:00
0
{}
18y3u5y
false
null
t3_18y3u5y
/r/LocalLLaMA/comments/18y3u5y/epistemology_a_simple_and_clear_way_of_hosting/
false
false
https://b.thumbs.redditm…KPYzEeVf4uIo.jpg
1
{'enabled': False, 'images': [{'id': 'AEe6iDIWF0lc3ZyhiBCU6_10cAwO6nRbDIp4kwDUZHs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Hbts-QtoEIcry72P8le01tGIOtbOkRt-pY22ssO5iZE.jpg?width=108&crop=smart&auto=webp&s=2b3a8dd546cae4b4cb980db1d2d8cd89c0006e08', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Hbts-QtoEIcry72P8le01tGIOtbOkRt-pY22ssO5iZE.jpg?width=216&crop=smart&auto=webp&s=bc842edab15277b956d097839a041028b450c44e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Hbts-QtoEIcry72P8le01tGIOtbOkRt-pY22ssO5iZE.jpg?width=320&crop=smart&auto=webp&s=31aeacb3fb35c2c662db4b52eb3a9de417f7800b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Hbts-QtoEIcry72P8le01tGIOtbOkRt-pY22ssO5iZE.jpg?width=640&crop=smart&auto=webp&s=9aabe1051bf553908429b60bf8cb947163b2794e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Hbts-QtoEIcry72P8le01tGIOtbOkRt-pY22ssO5iZE.jpg?width=960&crop=smart&auto=webp&s=901b00ee666ea75ad261c45b9f57d03556dc0b37', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Hbts-QtoEIcry72P8le01tGIOtbOkRt-pY22ssO5iZE.jpg?width=1080&crop=smart&auto=webp&s=0cd4c4884ba757f7dc3b68f4e167d3ed852ede53', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Hbts-QtoEIcry72P8le01tGIOtbOkRt-pY22ssO5iZE.jpg?auto=webp&s=378f427903cc1493cf71f0d221f98c832da10e57', 'width': 1200}, 'variants': {}}]}
AgentSearch (free API) looking for beta testers!
1
Hey all, We have been working on search, we recently open sourced a [fine-tuned model](https://huggingface.co/SciPhi/Sensei-7B-V1) here that was trained on several hundred million tokens of synthetic search data. In conjunction with this, I'm working on putting out a very large 4-TB [paired semantic search engine + dataset here](https://huggingface.co/datasets/SciPhi/AgentSearch-V1). I have undertaken this work because I believe these resources will be a big value add for the open source community - small models + vector search engines are the future, imo. I have found the setup to be quite good, you can take it for a quick spin at [search.sciphi.ai](https://search.sciphi.ai). It would be great to get some feedback from the community if any of you have tried it. Anyway, I've put out a free beta for the API. It allows you to connect generate search RAG completions directly with Sensei + AgentSearch, and Bing Search. It will likely stay free for individual dev sized workloads. It would be a great if some of you could try some of these new tools out and let me know what you think!
2024-01-04T03:38:42
https://www.reddit.com/r/LocalLLaMA/comments/18y387o/agentsearch_free_api_looking_for_beta_testers/
docsoc1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18y387o
false
null
t3_18y387o
/r/LocalLLaMA/comments/18y387o/agentsearch_free_api_looking_for_beta_testers/
false
false
self
1
{'enabled': False, 'images': [{'id': 'A3ZGJzJlmG9-tM0wFAmMVPyEqOoZpVB3ookgW14yr24', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3LzmMwlJvr7bwOxopUccVMUdqLOKOOFdU3D5-IVddZY.jpg?width=108&crop=smart&auto=webp&s=7478a81484a1b68fa96f829f68473400c1dc00c6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3LzmMwlJvr7bwOxopUccVMUdqLOKOOFdU3D5-IVddZY.jpg?width=216&crop=smart&auto=webp&s=ac46e82814d0310870f36c2f5b95500e8a1b6b96', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3LzmMwlJvr7bwOxopUccVMUdqLOKOOFdU3D5-IVddZY.jpg?width=320&crop=smart&auto=webp&s=21b3a3abb5a6eba166256f2ebebd82f8f290a7e2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3LzmMwlJvr7bwOxopUccVMUdqLOKOOFdU3D5-IVddZY.jpg?width=640&crop=smart&auto=webp&s=f37be8026bd5b2f023fa303709c8a01adc1c7964', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3LzmMwlJvr7bwOxopUccVMUdqLOKOOFdU3D5-IVddZY.jpg?width=960&crop=smart&auto=webp&s=35d7303e1803ee4c53432f854a87e5a3789c05a7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3LzmMwlJvr7bwOxopUccVMUdqLOKOOFdU3D5-IVddZY.jpg?width=1080&crop=smart&auto=webp&s=30ee02caf3310c26e2d1efd21a54b3d2722f9966', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3LzmMwlJvr7bwOxopUccVMUdqLOKOOFdU3D5-IVddZY.jpg?auto=webp&s=cd9fe69a025f296ee53fc9be10c5651f1e70fb8d', 'width': 1200}, 'variants': {}}]}
Python integration with Ollama2
1
Hi All, I have the python code to integrate it with ollama2, but it throws errors, kindly provide me directions. Python Code: output = [replicate.run](https://replicate.run)( "meta/llama-2-70b-chat:02e509c789964a7ea8736978a43525\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*\*", input={ "prompt": "Can you write a poem about open source machine learning?" } ) print(output) &#x200B; Error: python3 llamapi.py Traceback (most recent call last): File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/connection.py", line 174, in \_new\_conn conn = connection.create\_connection( File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/util/connection.py", line 95, in create\_connection raise err File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create\_connection sock.connect(sa) ConnectionRefusedError: \[Errno 61\] Connection refused During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen httplib\_response = self.\_make\_request( File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/connectionpool.py", line 398, in \_make\_request conn.request(method, url, \*\*httplib\_request\_kw) File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/connection.py", line 239, in request super(HTTPConnection, self).request(method, url, body=body, headers=headers) File "/Users/hshah/anaconda3/lib/python3.10/http/client.py", line 1282, in request self.\_send\_request(method, url, body, headers, encode\_chunked) File "/Users/hshah/anaconda3/lib/python3.10/http/client.py", line 1328, in \_send\_request self.endheaders(body, encode\_chunked=encode\_chunked) File "/Users/hshah/anaconda3/lib/python3.10/http/client.py", line 1277, in endheaders self.\_send\_output(message\_body, encode\_chunked=encode\_chunked) File "/Users/hshah/anaconda3/lib/python3.10/http/client.py", line 1037, in \_send\_output self.send(msg) File "/Users/hshah/anaconda3/lib/python3.10/http/client.py", line 975, in send self.connect() File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/connection.py", line 205, in connect conn = self.\_new\_conn() File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/connection.py", line 186, in \_new\_conn raise NewConnectionError( urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7fdc9f12cac0>: Failed to establish a new connection: \[Errno 61\] Connection refused During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/hshah/anaconda3/lib/python3.10/site-packages/requests/adapters.py", line 489, in send resp = conn.urlopen( File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/connectionpool.py", line 787, in urlopen retries = retries.increment( File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/util/retry.py", line 592, in increment raise MaxRetryError(\_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fdc9f12cac0>: Failed to establish a new connection: \[Errno 61\] Connection refused')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/hshah/llamapi.py", line 24, in <module> main() File "/Users/hshah/llamapi.py", line 20, in main data = get\_api\_data(url, headers) File "/Users/hshah/llamapi.py", line 6, in get\_api\_data response = requests.get(url, headers=headers) File "/Users/hshah/anaconda3/lib/python3.10/site-packages/requests/api.py", line 73, in get return request("get", url, params=params, \*\*kwargs) File "/Users/hshah/anaconda3/lib/python3.10/site-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, \*\*kwargs) File "/Users/hshah/anaconda3/lib/python3.10/site-packages/requests/sessions.py", line 587, in request resp = self.send(prep, \*\*send\_kwargs) File "/Users/hshah/anaconda3/lib/python3.10/site-packages/requests/sessions.py", line 701, in send r = adapter.send(request, \*\*kwargs) File "/Users/hshah/anaconda3/lib/python3.10/site-packages/requests/adapters.py", line 565, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fdc9f12cac0>: Failed to establish a new connection: \[Errno 61\] Connection refused'))python3 llamapi.py Traceback (most recent call last): File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/connection.py", line 174, in \_new\_conn conn = connection.create\_connection( File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/util/connection.py", line 95, in create\_connection raise err File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create\_connection sock.connect(sa) ConnectionRefusedError: \[Errno 61\] Connection refused During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen httplib\_response = self.\_make\_request( File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/connectionpool.py", line 398, in \_make\_request conn.request(method, url, \*\*httplib\_request\_kw) File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/connection.py", line 239, in request super(HTTPConnection, self).request(method, url, body=body, headers=headers) File "/Users/hshah/anaconda3/lib/python3.10/http/client.py", line 1282, in request self.\_send\_request(method, url, body, headers, encode\_chunked) File "/Users/hshah/anaconda3/lib/python3.10/http/client.py", line 1328, in \_send\_request self.endheaders(body, encode\_chunked=encode\_chunked) File "/Users/hshah/anaconda3/lib/python3.10/http/client.py", line 1277, in endheaders self.\_send\_output(message\_body, encode\_chunked=encode\_chunked) File "/Users/hshah/anaconda3/lib/python3.10/http/client.py", line 1037, in \_send\_output self.send(msg) File "/Users/hshah/anaconda3/lib/python3.10/http/client.py", line 975, in send self.connect() File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/connection.py", line 205, in connect conn = self.\_new\_conn() File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/connection.py", line 186, in \_new\_conn raise NewConnectionError( urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7fdc9f12cac0>: Failed to establish a new connection: \[Errno 61\] Connection refused During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/hshah/anaconda3/lib/python3.10/site-packages/requests/adapters.py", line 489, in send resp = conn.urlopen( File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/connectionpool.py", line 787, in urlopen retries = retries.increment( File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/util/retry.py", line 592, in increment raise MaxRetryError(\_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fdc9f12cac0>: Failed to establish a new connection: \[Errno 61\] Connection refused')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/hshah/llamapi.py", line 24, in <module> main() File "/Users/hshah/llamapi.py", line 20, in main data = get\_api\_data(url, headers) File "/Users/hshah/llamapi.py", line 6, in get\_api\_data response = requests.get(url, headers=headers) File "/Users/hshah/anaconda3/lib/python3.10/site-packages/requests/api.py", line 73, in get return request("get", url, params=params, \*\*kwargs) File "/Users/hshah/anaconda3/lib/python3.10/site-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, \*\*kwargs) File "/Users/hshah/anaconda3/lib/python3.10/site-packages/requests/sessions.py", line 587, in request resp = self.send(prep, \*\*send\_kwargs) File "/Users/hshah/anaconda3/lib/python3.10/site-packages/requests/sessions.py", line 701, in send r = adapter.send(request, \*\*kwargs) File "/Users/hshah/anaconda3/lib/python3.10/site-packages/requests/adapters.py", line 565, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fdc9f12cac0>: Failed to establish a new connection: \[Errno 61\] Connection refused')) python3 llamapi.py Traceback (most recent call last): File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/connection.py", line 174, in \_new\_conn conn = connection.create\_connection( File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/util/connection.py", line 95, in create\_connection raise err File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create\_connection sock.connect(sa) ConnectionRefusedError: \[Errno 61\] Connection refused During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen httplib\_response = self.\_make\_request( File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/connectionpool.py", line 398, in \_make\_request conn.request(method, url, \*\*httplib\_request\_kw) File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/connection.py", line 239, in request super(HTTPConnection, self).request(method, url, body=body, headers=headers) File "/Users/hshah/anaconda3/lib/python3.10/http/client.py", line 1282, in request self.\_send\_request(method, url, body, headers, encode\_chunked) File "/Users/hshah/anaconda3/lib/python3.10/http/client.py", line 1328, in \_send\_request self.endheaders(body, encode\_chunked=encode\_chunked) File "/Users/hshah/anaconda3/lib/python3.10/http/client.py", line 1277, in endheaders self.\_send\_output(message\_body, encode\_chunked=encode\_chunked) File "/Users/hshah/anaconda3/lib/python3.10/http/client.py", line 1037, in \_send\_output self.send(msg) File "/Users/hshah/anaconda3/lib/python3.10/http/client.py", line 975, in send self.connect() File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/connection.py", line 205, in connect conn = self.\_new\_conn() File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/connection.py", line 186, in \_new\_conn raise NewConnectionError( urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7fdc9f12cac0>: Failed to establish a new connection: \[Errno 61\] Connection refused During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/hshah/anaconda3/lib/python3.10/site-packages/requests/adapters.py", line 489, in send resp = conn.urlopen( File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/connectionpool.py", line 787, in urlopen retries = retries.increment( File "/Users/hshah/anaconda3/lib/python3.10/site-packages/urllib3/util/retry.py", line 592, in increment raise MaxRetryError(\_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fdc9f12cac0>: Failed to establish a new connection: \[Errno 61\] Connection refused')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/hshah/llamapi.py", line 24, in <module> main() File "/Users/hshah/llamapi.py", line 20, in main data = get\_api\_data(url, headers) File "/Users/hshah/llamapi.py", line 6, in get\_api\_data response = requests.get(url, headers=headers) File "/Users/hshah/anaconda3/lib/python3.10/site-packages/requests/api.py", line 73, in get return request("get", url, params=params, \*\*kwargs) File "/Users/hshah/anaconda3/lib/python3.10/site-packages/requests/api.py", line 59, in request return session.request(method=method, url=url, \*\*kwargs) File "/Users/hshah/anaconda3/lib/python3.10/site-packages/requests/sessions.py", line 587, in request resp = self.send(prep, \*\*send\_kwargs) File "/Users/hshah/anaconda3/lib/python3.10/site-packages/requests/sessions.py", line 701, in send r = adapter.send(request, \*\*kwargs) File "/Users/hshah/anaconda3/lib/python3.10/site-packages/requests/adapters.py", line 565, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fdc9f12cac0>: Failed to establish a new connection: \[Errno 61\] Connection refused')) &#x200B;
2024-01-04T03:36:17
https://www.reddit.com/r/LocalLLaMA/comments/18y36kr/python_integration_with_ollama2/
Better_Run_1295
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18y36kr
false
null
t3_18y36kr
/r/LocalLLaMA/comments/18y36kr/python_integration_with_ollama2/
false
false
self
1
null
Improving the odds
1
Last time an "AI is dead" post went viral, mixtral dropped 20 mins later. So I'm here to do my part and say it's been a boring 2 weeks.
2024-01-04T03:07:36
https://www.reddit.com/r/LocalLLaMA/comments/18y2l9l/improving_the_odds/
Sweet_Protection_163
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18y2l9l
false
null
t3_18y2l9l
/r/LocalLLaMA/comments/18y2l9l/improving_the_odds/
false
false
self
1
null
Is it just me TINYLLAMA ?
1
Yesterday, i decided to start using TinyLlama for new project but when i tried it via the provided chat template, it worked like below provided example. Below is chat using provided chat template, it doesn't works as it should. <|system|> You are a chatbot.</s> <|user|> hi</s> <|assistant|> I'm not a chatbot, but I can provide you with some tips on how to Below is another template i found on Llama.cpp server You are a chatbot. User: Hi. Assistant: Hi there. How may I assist you today? Can you please provide the template which worked for you. ps: [this is the model](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0?text=You+are+a+chatbot.%0A%0AUser%3A+Hi.%0A%0AAssistant%3A)
2024-01-04T02:52:40
https://www.reddit.com/r/LocalLLaMA/comments/18y2a0i/is_it_just_me_tinyllama/
ExternalAd8105
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18y2a0i
false
null
t3_18y2a0i
/r/LocalLLaMA/comments/18y2a0i/is_it_just_me_tinyllama/
false
false
self
1
{'enabled': False, 'images': [{'id': 'oN89DCTlpN4ILjsqZ-eqHHBHOsMqFEAApHQdMqxL2uo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/sIemo2ZRTyfh8F28nwbT2_1K-FdX21f6h4iIinp0cts.jpg?width=108&crop=smart&auto=webp&s=86489a0d0a5efa5573fd0a7a1a298a1f686ca3fc', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/sIemo2ZRTyfh8F28nwbT2_1K-FdX21f6h4iIinp0cts.jpg?width=216&crop=smart&auto=webp&s=9cea7936f43ac604ef3149813fc9854023b2aa44', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/sIemo2ZRTyfh8F28nwbT2_1K-FdX21f6h4iIinp0cts.jpg?width=320&crop=smart&auto=webp&s=1a0690f3f071a646b295daadf7b164228473c273', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/sIemo2ZRTyfh8F28nwbT2_1K-FdX21f6h4iIinp0cts.jpg?width=640&crop=smart&auto=webp&s=53b19b2635f4a385189f055ade13dd0a16901758', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/sIemo2ZRTyfh8F28nwbT2_1K-FdX21f6h4iIinp0cts.jpg?width=960&crop=smart&auto=webp&s=b60d4609bb2dafdc91ceaedf6ce3b21ca29c3eb0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/sIemo2ZRTyfh8F28nwbT2_1K-FdX21f6h4iIinp0cts.jpg?width=1080&crop=smart&auto=webp&s=06eb581c8ee845b13de6674c30a6a1752067e175', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/sIemo2ZRTyfh8F28nwbT2_1K-FdX21f6h4iIinp0cts.jpg?auto=webp&s=1bcc0c096508e861f8770b90d7eddb84f7d706f2', 'width': 1200}, 'variants': {}}]}
Has anyone gotten Langchain to stream Hugging Face models with FastAPI?
1
I get responses from my model, but only when I run the model normally, and don't use the streaming=True parameter. Context to the issue here with code example: [https://github.com/langchain-ai/langchain/issues/15516](https://github.com/langchain-ai/langchain/issues/15516)
2024-01-04T02:49:07
https://www.reddit.com/r/LocalLLaMA/comments/18y27a2/has_anyone_gotten_langchain_to_stream_hugging/
HiddenMushroom11
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18y27a2
false
null
t3_18y27a2
/r/LocalLLaMA/comments/18y27a2/has_anyone_gotten_langchain_to_stream_hugging/
false
false
self
1
{'enabled': False, 'images': [{'id': '0hVVpjIlrBCL0Dnqd7-Mc7kuUdfwKFmvStgYH4X2Kgo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8qtH7WYjtmdzQKD2QteiFxDwZpsj6v2rXYsMJMe0JrQ.jpg?width=108&crop=smart&auto=webp&s=93a8937d0abe5cca339d2d80c76c758e793ca087', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8qtH7WYjtmdzQKD2QteiFxDwZpsj6v2rXYsMJMe0JrQ.jpg?width=216&crop=smart&auto=webp&s=1482d3d05ba2d14c3170ddd6f942bb30b1e6562d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8qtH7WYjtmdzQKD2QteiFxDwZpsj6v2rXYsMJMe0JrQ.jpg?width=320&crop=smart&auto=webp&s=a2492db4e82ad44cb1fc1e262d1ef10a5d6dbe04', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8qtH7WYjtmdzQKD2QteiFxDwZpsj6v2rXYsMJMe0JrQ.jpg?width=640&crop=smart&auto=webp&s=abee69c45bc6394a3178652a331daefb1d4f7671', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8qtH7WYjtmdzQKD2QteiFxDwZpsj6v2rXYsMJMe0JrQ.jpg?width=960&crop=smart&auto=webp&s=a7146e4f32aca27b6aa70aa8d867dfa39f5a2818', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8qtH7WYjtmdzQKD2QteiFxDwZpsj6v2rXYsMJMe0JrQ.jpg?width=1080&crop=smart&auto=webp&s=54631e322b3eba2a4b4d8f7244cd4f7ac4fa8467', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8qtH7WYjtmdzQKD2QteiFxDwZpsj6v2rXYsMJMe0JrQ.jpg?auto=webp&s=1cf8d13050ba103f18e11120cba2ff7a75d7e038', 'width': 1200}, 'variants': {}}]}
Generating longer responses to prompts with Mixtral-8x7B-Instruct Q4_K_M.gguf
1
I have a scrpt to run a chat with the Mixtral-8x7B-Instruct Q4\_K\_M.gguf model. The responses I get to my prompts, even when I ask for detail and examples, tends to be less than 1000 tokens, even when I set context length to something like 20000 tokens. My model load and generation parameters are modelParms\['n\_gpu\_layers'\] = 18 modelParms\['n\_ctx'\] = 20000 modelParms\['n\_threads'\] = 20 modelParms\['numa'\] = True modelParms\['verbose'\] = False modelParms\['use\_cache'\] = True modelParms\['num\_experts\_per\_tok'\] = 2 model = Llama(\*\*modelParms) genParms\['stream'\] = True genParms\['max\_tokens'\] = 24000 genParms\['stop'\] = '</s>' genParms\['echo'\] = False genParms\['temperature'\] = .001 genParms\['repeat\_penalty'\] = 1.0 genParms\['top\_k'\] = 0 genParms\['top\_p'\] = 1.0 genParms\['min\_p'\] = 0.02 genParms\['repeat\_penalty'\] = 1.0 genParms\['mirostat\_mode'\] = 0 genParms\['mirostat\_tau'\] = 5.0 genParms\['mirostat\_eta'\] = 0.1 Are there settings I can use that will result in longer, more detailed responses, or is this really all the model can come up with to respond to my queries? I did give it a silly prompt 'say "hello" 10000 times' and got about 6500 tokens and ended up killing it, so maybe the model really doesn't have that much to say.
2024-01-04T01:49:07
https://www.reddit.com/r/LocalLLaMA/comments/18y0w5j/generating_longer_responses_to_prompts_with/
catzilla_06790
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18y0w5j
false
null
t3_18y0w5j
/r/LocalLLaMA/comments/18y0w5j/generating_longer_responses_to_prompts_with/
false
false
self
1
null
What am I doing wrong?
101
2024-01-04T01:44:55
https://i.redd.it/gu7qisf4wbac1.png
slykethephoxenix
i.redd.it
1970-01-01T00:00:00
0
{}
18y0sru
false
null
t3_18y0sru
/r/LocalLLaMA/comments/18y0sru/what_am_i_doing_wrong/
false
false
https://b.thumbs.redditm…GnYnxr6YlpyY.jpg
101
{'enabled': True, 'images': [{'id': 'kviFixXngfL71o-ecesnJisnRzk6M15hvYwhKr2X9fM', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/gu7qisf4wbac1.png?width=108&crop=smart&auto=webp&s=679b6301bcec6f1884affd6d5c7074e0d7ad7428', 'width': 108}, {'height': 160, 'url': 'https://preview.redd.it/gu7qisf4wbac1.png?width=216&crop=smart&auto=webp&s=465108b21e201f49d7b284cd697669a035b1d79b', 'width': 216}, {'height': 238, 'url': 'https://preview.redd.it/gu7qisf4wbac1.png?width=320&crop=smart&auto=webp&s=88d13a9e371bedeae0d065c052a0cc45deddec09', 'width': 320}, {'height': 476, 'url': 'https://preview.redd.it/gu7qisf4wbac1.png?width=640&crop=smart&auto=webp&s=1421ceb1c59b13e017dff1a3e2cd651a0ee3e60a', 'width': 640}, {'height': 714, 'url': 'https://preview.redd.it/gu7qisf4wbac1.png?width=960&crop=smart&auto=webp&s=cc05495c4cc5c0cc724867cb51cf06f44d2c8754', 'width': 960}, {'height': 803, 'url': 'https://preview.redd.it/gu7qisf4wbac1.png?width=1080&crop=smart&auto=webp&s=d4875292db0867128a116f55a6052ca76866fc38', 'width': 1080}], 'source': {'height': 1093, 'url': 'https://preview.redd.it/gu7qisf4wbac1.png?auto=webp&s=15ad07fa1fcfc0483f748eae241d7c510003c43d', 'width': 1469}, 'variants': {}}]}
How to store llm chat history efficiently?
1
Let's say you are playing with multiple llms to find out which one is better for various tasks. So you have 10 or 20 or 50 different models. Now you want to ask each one a question like "how many ducks are here if..." and collect answers. So far so good. You just need to store 50 strings, well, multiline strings. Now you want to try another question like "here is description of my Instagram profile, list ideas how to make it better". So now you have another 50 multiline strings. But then you have idea to modify first question about ducks and see how the answers change. And then you want to ask about Instagram profile in diferent way. Storing that in 50 different text files is difficult, because you need to copy both questions and answers which are long. I tried LibreOffice Base and LibreOffice Calc but it wasn't really user friendly for this specific task. What's your solution? Do you use some database or spreadsheet? I need to be able to quicky search and copy for different models and questions. So both database and spreadsheet sounds like a good idea but I wasn't able to setup it efficiently yet.
2024-01-04T01:04:11
https://www.reddit.com/r/LocalLLaMA/comments/18xzvwf/how_to_store_llm_chat_history_efficiently/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xzvwf
false
null
t3_18xzvwf
/r/LocalLLaMA/comments/18xzvwf/how_to_store_llm_chat_history_efficiently/
false
false
self
1
null
Guides on Fine Tuning Llama2 on Raw Text?
1
Been reading a lot of guides such as [Fine-Tune Your Own Llama 2 Model in a Colab Notebook](https://towardsdatascience.com/fine-tune-your-own-llama-2-model-in-a-colab-notebook-df9823a04a32). However, its always geared towards the chat model. I'm more interested in fine-tuning the llama2 base model on raw text. I have 100k files of raw story text I've created and would like to create a llama 2 model to help do text completion on. Are there any guides on doing such a task? My biggest question mark is how to format the data that is inputted. Guides such as the one linked above are all focused on the chat models. I don't want anything to do with chat, though instruction models I guess could be helpful.
2024-01-04T00:41:18
https://www.reddit.com/r/LocalLLaMA/comments/18xzcoz/guides_on_fine_tuning_llama2_on_raw_text/
RayMallick
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xzcoz
false
null
t3_18xzcoz
/r/LocalLLaMA/comments/18xzcoz/guides_on_fine_tuning_llama2_on_raw_text/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Yeu8GhXRVABCGX4WGdzXeKt4qv0JESjVdc5y25Ad-pg', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/LXZ1jt9hDM5O32_Wg7aZ8kP6NewUsuwY50FBiAOY0tc.jpg?width=108&crop=smart&auto=webp&s=92ae39122052d56ff84a737a6fc1881296116dc0', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/LXZ1jt9hDM5O32_Wg7aZ8kP6NewUsuwY50FBiAOY0tc.jpg?width=216&crop=smart&auto=webp&s=758342342219cb4b4ffa87fc23012ab8b6f00731', 'width': 216}, {'height': 175, 'url': 'https://external-preview.redd.it/LXZ1jt9hDM5O32_Wg7aZ8kP6NewUsuwY50FBiAOY0tc.jpg?width=320&crop=smart&auto=webp&s=aa0dff4a25f98e6baffca458631186caf579aef7', 'width': 320}, {'height': 350, 'url': 'https://external-preview.redd.it/LXZ1jt9hDM5O32_Wg7aZ8kP6NewUsuwY50FBiAOY0tc.jpg?width=640&crop=smart&auto=webp&s=fa751d6b4fc4f530e7d220e83e67f4cbabbbda22', 'width': 640}, {'height': 525, 'url': 'https://external-preview.redd.it/LXZ1jt9hDM5O32_Wg7aZ8kP6NewUsuwY50FBiAOY0tc.jpg?width=960&crop=smart&auto=webp&s=b2166fbacec18689ce7d0e86a654f795a717ada5', 'width': 960}, {'height': 591, 'url': 'https://external-preview.redd.it/LXZ1jt9hDM5O32_Wg7aZ8kP6NewUsuwY50FBiAOY0tc.jpg?width=1080&crop=smart&auto=webp&s=1a9ad055eb346d3ae2f46a1f3395ff70d231bcf4', 'width': 1080}], 'source': {'height': 657, 'url': 'https://external-preview.redd.it/LXZ1jt9hDM5O32_Wg7aZ8kP6NewUsuwY50FBiAOY0tc.jpg?auto=webp&s=67915a903b2e11ddf6714228f85447b93e8b963f', 'width': 1200}, 'variants': {}}]}
Augmentoolkit — Easily Generate Quality Multi-Turn Data based on Human-Written Documents, using Local Models. Painlessly Finetune AI on Specific Domains.
1
\[This tool is being released alongside a synthetic demo dataset — 1778 conversations with 14k lines of dialogue across them\] **Model creators should not be data annotators.** **Yet if we want to create a unique fine tune, this is what we spend most of our time doing** — either chatting with bots and editing their responses to generate hybrid datasets (which then we then can't actually open source, due to the sensitive nature of the chats), or burning hundreds of dollars on the OpenAI API to get data from a model whose writing style we probably hate (otherwise we wouldn't be here). And if you use the OAI API, you'll probably have to manually edit a bunch of those responses anyway to purge GPT-isms (e.g., 'ministrations'). **There are a few typical problems people seem to run into**, comedically summarized in the flowchart below. I, personally, fell into the OpenAI API trap with the original Augmental (this follows up on that project). &#x200B; [Data needs to be fast, shareable, and scaleable. Ideally it'd be easy to make too.](https://preview.redd.it/h9bd3n0febac1.png?width=632&format=png&auto=webp&s=5c047c64be17c9ae00de3ecb1bd44716c54f384d) So, **getting data for finetunes sucks right now for people in the open-source community.** We don't have users or contractors we can use for the job like closed-source can. The relative difficulty of making data might be why merges are far more common. **But the solution seems obvious: we've made machines that write, so let's get the machines to do our data writing for us!** Turns out this is really hard, because open-source models can be inconsistent and hard to control. But through part-time work over the course of the last three months **I think I've made something functional, maybe even good.** &#x200B; [https:\/\/github.com\/e-p-armstrong\/augmentoolkit](https://preview.redd.it/qxgy1jf0fbac1.png?width=934&format=png&auto=webp&s=e3f6d845453007354649496974939131cbb2da61) [Augmentoolkit](https://github.com/e-p-armstrong/augmentoolkit) is my attempt at solving our data problems. Put simply, **Augmentoolkit is a way to make instruct-tuning data using compute and plaintext file(s) containing information about a subject. It focuses on accuracy, congifurability, and having a low barrier-to-entry. You can run most of it with a 13b (or all of it, settings-dependent). It's a Jupyter Notebook, so it should be easy to use and debug.** It can generate RP-style data or user-assistant style data (though only the former has been extensively refined), so it's suitable for a whole bunch of different use cases. **The RP-style convs have scenarios and character cards to match the conversations.** At a high level, [Augmentoolkit](https://github.com/e-p-armstrong/augmentoolkit) takes documents, generates questions (and their answers) based on the testable information in the documents, and then generates conversations between two characters in which different groups of those questions are asked and answered. **Here's a more visual breakdown of some of the features, because walls of text need variety.** You can also read about basically all of this in the project's README. This tool's mascot, Augmentan-2, also makes a cameo. The tool got a new name from the previous one (Augmental) but she didn't because I couldn't think of another clever pun. &#x200B; [No, I will not stop giving the things I make Anime mascots](https://preview.redd.it/mqup5g4bfbac1.png?width=612&format=png&auto=webp&s=b3484090cd466e60b414127d1131cbe59398777a) **Augmentoolkit tries to allow basically anyone to make a good dataset about basically anything.** At the very least, it shows that an automated approach involving converting human-written text is viable, and **it** **provides a foundation that you can build upon for your specific needs.** **It's meant to reduce** (and possibly, with enough improvement, remove) **data as a significant pain point for model creators.** I want this to help democratize (and make scaleable) data generation. Even if you write really good data for RP bots, your writing ability cannot be 10Xed or 100Xed in scale—but your prompts CAN be. I want [Augmentoolkit](https://github.com/e-p-armstrong/augmentoolkit) to introduce some much-needed automation into this area of people's workflows, since though we've heard a lot about the idea that data quality and quantity are paramount, actually getting a lot of data has been out of reach for most people. Now, hopefully, people can combine their GPUs to produce massive datasets that stick around forever (far more parallelizable and verifiable than distributed training); or use them individually to make data in their own niches of interest. Plus, you can completely **specialize** [Augmentoolkit](https://github.com/e-p-armstrong/augmentoolkit) for a specific type of text just **by changing the few-shot examples to be from your type of text** — so you don't even necessarily need to do a huge amount of coding to completely revamp what this does, and turn it from a jack of all trades into a master of one. All you need to do is write English. Theoretically, anyone with a good enough GPU (or enough money to rent one for a couple of days; the rate is about \~$0.67 CAD/hr for an A6000 last I checked) can now create their very own dataset to serve as the core of their finetunes. Creating domain experts should also be much easier. **How is this different than just training on the raw books? The data this generates is conversational and multi-turn, so it is useful for fine-tuning instruct-tuned models. Here's an example of an RP-style conversation from an old test run of the pipeline:** &#x200B; [It's capable of generating evil characters, clearly](https://preview.redd.it/sgt0zehqgbac1.png?width=2256&format=png&auto=webp&s=b9847470cbb16c572e49966fff2e1b0df2b7d83a) **Here's another example from the latest run. A bit less of an exemplar, but still decent (possibly more representative of most of the samples). Character cards are similar to AliChat format.** &#x200B; [Yes, it can NSFW. In fact 1\/3rd of the characters are flirtatious by default, so that RP finetuners can go wild.](https://preview.redd.it/ghklazwzgbac1.png?width=1777&format=png&auto=webp&s=8691c5c26e0ffb91b59c377ce623acf6f3c3f375) # Want to make your own dataset using open-source models? Here are some Links: # [Augmentoolkit Repo](https://github.com/e-p-armstrong/augmentoolkit) # [Demo Dataset](https://huggingface.co/datasets/Heralax/Augmentoolkit-demo) # [Project Gutenberg](https://www.gutenberg.org/) <— Great for finding plaintext to make data from &#x200B; As an aside, I can potentially see the question-answer part of [Augmentoolkit](https://github.com/e-p-armstrong/augmentoolkit)\-created datasets potentially being useful for Retrieval Augmented Generation, because if I remember rightly there are models that can match a query with an answer. So the first half of Augmentoolkit could possibly be invaluable for people trying to make a knowledge base more searchable by LLMs, though this is definitely not my area of expertise nor the intended use of [Augmentoolkit](https://github.com/e-p-armstrong/augmentoolkit). Either way, the raw question-answer pairs used to make each conversation are saved along with that conversation uploaded dataset, so if you want to experiment here you can. **I want to make clear** that right now there are some problems, and a good number of the examples you open up, if you just randomly inspect them, will probably have slight things that put you off. But a) many of them won't; b) you can improve quality by changing settings (and if you're really hardcore, prompts) specifically for your needs and type of input text; c) even in examples WITH issues, the issues may be minor and the data is still probably beneficial to a model overall; and d) Assistant Mode is a bit less error-prone from my limited testing of it, so if you're perfectionist, you can use that. e), at least it's not as bad as PIPPA. *Damn, I've taken so many shots at PIPPA it might be hard to repost this on the Pygmalion subreddit. Oh well.* **Bonus flowchart:** &#x200B; [https:\/\/github.com\/e-p-armstrong\/augmentoolkit](https://preview.redd.it/7uej5wpphbac1.png?width=925&format=png&auto=webp&s=ea4ed7e5722f35c73995e920c451c8481a625624) # FAQ >"How expensive is it?" Since it uses local models, the price all depends on what GPUs you rent (or own, in which case it's free), and how long you're willing to wait. If, for instance, I had rented 3090s and used Q\_6 quants of Flatorcamaid for all but the last step of the pipeline, I could have done things about 3x cheaper (instead I used A6000s and Q\_8s). Still really bitter about that ):< Let it be known: A6000s may be cheap individually, but renting 3 of them for days adds up. Experiment and explore on something that can run a 70b, but when it comes down to creating a dataset off of an entire text, you'll want to do all but the last step on as cheap a machine as you can manage. Or on your own computer. I bet an aggressively-quanted 70b should do fine. &#x200B; >"How fast is it to run?" This is hardware-dependent, but it took about 4.5 days for 3 A6000s rented via [Vast.ai](https://Vast.ai) to make the demo dataset. Using A6000s was a stupid decision for a bunch of reasons, namely: they're about as fast or slower than 3090s for this usecase (they were running 13bs for most of that time), and they're 3x as expensive. Point being: how fast is it? I don't know! Because I didn't run it in a cost-efficient and time-efficient way. You can always find out for yourself though, lol. &#x200B; >"What texts did you use for your dataset, and why?" *Principles of Chemistry by Demitry Mendeleev* — because I wanted some knowledge from a science that everyone knows a bit about, and this was available on Gutenberg. Also the intro to this book is surprisingly philosophical and might give a model some neat ideas about knowledge and keeping up with a rapidly-growing field, so it's relevant to us. Naturally some of the information in this book is going to be very out of date — Mendeleev didn't even know what a proton was. But that itself makes for an interesting test — can models learn outdated/wrong information using data generated from the Augmentoolkit, and does that learning overwrite up-to-date information? NOTE: Not all of this book was used, to save time. It's very, very long. Also, the questions based on markdown tables that somehow passed the filter are probably BS. Lots of the stuff generated from this book is pretty good though. *On Liberty by John Stuart Mill* — I wanted to see how it would handle a fully philosophical and opinionated text. The answer seems to be "pretty well", which means that those few-shot examples from Plato's The Republic and Nietzsche's Thus Spake Zarathustra paid off. I haven't looked at this one's outputs much but I can't see why it'd be awful. *On War by Carl von Clausewitz* — So it can help me plan my takeover of the world, muahahaha. So I can see how well it can learn information that probably doesn't come up too much in its pretraining data. Also, because Clausewitz is cool. Also, because I saw it while browsing Gutenberg and thought it'd be interesting to add. From the few outputs I've looked at from here I'd say it's good. Augmentoolkit by default excels on texts with lots of factual (and a bit of understanding-based) information (that's not numbers-heavy or filled with really tough language). *Simple Sabotage, by the Office of Strategic Services* — This one was originally a curiosity add during my testing, but I kept it in the final product to show off how Augmentoolkit handles manual-style texts by default. Now models trained on the dataset can tell you how to delay trains, set fires, be bad at your job, etc. Came out decently, so manuals work for the pipeline too. *Introduction to Logic and Critical Thinking by Matthew Van Cleave* — By far the least-famous text in this list, I wanted to see if making the model read a logic textbook would teach it to think better, or at least understand the concept of thought better. It mucked up the bits with end-of-chapter exercises but lots of other stuff came out nicely. It might be better to train examples from this text WITH THE SOURCE TEXT INCLUDED IN THE PROMPT and a special instruction that both characters know that information, since a ton of the conversations refer to in-chapter examples that just don't make sense out of context. A cautionary tale about the importance of removing such things, or adjusting the text suitability prompt, for textbooks. &#x200B; >"Do you have a handy flowchart that shows exactly what all the steps are in Augmentoolkit, and how they fit together?" Why, yes, I do; thank you for the extremely convenient question. &#x200B; [And here I thought I'd never use UML](https://preview.redd.it/us9ozur9ibac1.png?width=906&format=png&auto=webp&s=dc0db1089fe62c906ce2e05136c7e3b38fa44e72) &#x200B; >"You missed an opportunity by having Augmentoolkit focus on teaching knowledge rather than skills, understanding, and chain-of-thought!" I didn't miss an opportunity, I just wanted to release this thing faster. I have some ideas for how to extend this; some are listed at the bottom of the repo. If you have a world-changing idea that you can build into this, please preempt me and do it, we're all better off for the innovation. &#x200B; >"The old Augmental dataset was better for RP!" I don't doubt it. That one was built specifically for RP, whereas this also attempts to teach the model factual information. This leads to less diversity of scenarios and a repetitive conversation format, even though it does use a wide variety of character personalities. I bet that if you made an Augmentoolkit completely focused on RP, you could recover that performance; as it stands, [Augmentoolkit](https://github.com/e-p-armstrong/augmentoolkit) is meant to be a jack of all trades so that I can see what kind of model creator finds it most useful (and also so that all different types of model creator can see it's at least somewhat viable for their use cases, and hack it to specialize in those). Also, Cinematika SOMEWHAT fulfills a similar role for RP, though I do not know how well, as I've never tried it. &#x200B; >"I saw some crappy data entries in your dataset!" *Yeah, I did too.* I probably saw a lot more than you, in fact. Some of these are due to the input text, some are due to a focus on generalization, and some are due to "I haven't fixed it yet." One issue is that [Augmentoolkit](https://github.com/e-p-armstrong/augmentoolkit) is currently a bit too permissive with what it considers paragraphs worthy of having questions asked about them; this can lead to a number of lower-quality examples, if you don't manually prune the text for things such as end-of-chapter exercises or markdown tables. Occasionally many, sometimes most (depending on the input text, intro to logic is nasty at times), training examples will have one bad question in there due to this choice (originally made because I don't want a too-strict prompt to prevent people from using texts I haven't thought of trying as inputs). There are also, to be sure, a ton of bugs and inconsistencies where a bit more TLC could fix all the issues. The only thing is that TLC takes time. Important to note, too: many of the quality problems are caused by text-specific quirks that the few-shot examples do not account for, and this is necessarily the case, because the variety in all the plaintext out there is enormous and no prompt can account for it all. I tried to account for a lot, but I missed some stuff. Only 2 of the texts used in the dataset were tested on during development, and even then, only the first few sections of those texts were fed through the full pipeline at all before about 6 days ago. Key takeaway: if you want [Augmentoolkit](https://github.com/e-p-armstrong/augmentoolkit) outputs to be really perfect, either you'll have to remove special features from the input text that are likely to give it hiccups, or you'll have to modify the few-shot examples in a small handful of key files (see point #5 in this section of the README\[link\]) to handle your kind of input text. All in all, I think the dataset is still mostly high-quality — at the very least, it's probably no more broken than the original Augmental dataset, which due to poor GPT-4 instruction following, had more than a few completely broken examples (and that dataset is still decently popular; IIRC the winner of the Chai Prize uses it alongside two other datasets for their model). And the effort expended in modifying some examples surely pales in comparison to manually creating a dataset of thousands of rows. What this long ramble is trying to convey is: [Augmentoolkit](https://github.com/e-p-armstrong/augmentoolkit) is meant to be useful by default, and despite many glaring issues, I think it is really, really useful. But it's also an early release; and on top of that, it's meant to be a foundation for more specialized augmented data generation. So it won't be anywhere near perfect. However, the code is decently simple, most of the changes you'll have to make are just prompts, and the key parts are pointed out by the README, so it should be pretty easy to customize if the quality or types of output are not what you're looking for. Fundamentally I'm releasing it, despite the large 'known issues' list, because I think that even with its problems [Augmentoolkit](https://github.com/e-p-armstrong/augmentoolkit) is still a workable solution to a dire problem many model creators face. And because I think other people can do some really cool shit with it, and that it's selfish to keep hoarding it on my hard drive because of perfectionism. As Reid Hoffman said, "If you are not embarrassed by the first version of your product, you've launched too late." [Augmentoolkit](https://github.com/e-p-armstrong/augmentoolkit) isn't a product, but the principle still holds. &#x200B; >"Why did you never release a 70b of Augmental?! You said you would!! I'm never trusting you again! ):<" Sorry! The story is that immediately after releasing Augmental, I had to [fix Augmental](https://huggingface.co/Heralax/Augmental-13b-v1.50_B), because my hyperparameters were garbage the first time. And after fixing it, I'd had the idea for this project, which (hilariously) was meant to take a weekend to do but ended up taking 3 months, during which I routinely chose working on this over studying for exams (gotta advance the human race, right?). That lack of time, combined with an inferiority complex about data quality in the original Augmental dataset, made me keep deciding to put a 70b off until I could finish this. Now that this is done (or at least, released), I might combine the old Augmental dataset with this one + some more stuff and do a 70b. But I'm not going to make the same mistake of setting a specific timeline. Also, if you have a 70b-capable machine, consider making and sharing some Augmentoolkit datasets while you wait for me to do this lol. I might very well use them! >"Why didn't you use Mixtral and instead used a combination of Llama models? That would solve issues caused by very high RoPE!" I recently implemented an [experimental Mixtral branch](https://github.com/e-p-armstrong/augmentoolkit/tree/mixtral-ver), it seems to work well -- very smart -- although a bit more slowly (and it's prone to infinite repetition). I'm open to sampler improvements. Maybe that's a challenge for kalomaze. That's all for this post, I'll try to answer questions and comments as much as I can! Hope to see you over in [the repo](https://github.com/e-p-armstrong/augmentoolkit)! Also belated Happy New Years, r/LocalLlama! Here's to another year of innovation!
2024-01-04T00:37:33
https://www.reddit.com/r/LocalLLaMA/comments/18xz9it/augmentoolkit_easily_generate_quality_multiturn/
Heralax_Tekran
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xz9it
false
null
t3_18xz9it
/r/LocalLLaMA/comments/18xz9it/augmentoolkit_easily_generate_quality_multiturn/
false
false
https://b.thumbs.redditm…LGIHp8yTcWcM.jpg
1
null
How can I get the model to choose the next word from a list?
1
Can I do this with only huggingface transformers library and pytorch? &#x200B; Aside from libraries like Microsoft guidance or LMQL, what else can I do to achieve this? &#x200B; Also, can I get the model to stop generating after it reaches a certain phrase, using the same approach?
2024-01-04T00:05:04
https://www.reddit.com/r/LocalLLaMA/comments/18xyi9f/how_can_i_get_the_model_to_choose_the_next_word/
manjimin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xyi9f
false
null
t3_18xyi9f
/r/LocalLLaMA/comments/18xyi9f/how_can_i_get_the_model_to_choose_the_next_word/
false
false
self
1
null
How might i approach this problem with an LLM
4
I have thousands of files, with synopsis text like this: Brian Barnes talks about all manner of plastic objects with Cameron Bale, Professor of Marketing at the School of Business, Alberta University; John Downey Professor of Materials & Society at University College London; Stephen Cowling Professor Emerita of History at the University of Delaware and founder of the Museum of Plastic; space archaeologist Dr. Kate Reilly from Flinders University in Australia. I was able to use spaCy to extract the names: Brian Barnes Cameron Bale John Downey Stephen Cowling Kate Reilly (Annoyingly, not their title (Dr. Kate Reilly) but thats ok, i can work with it.) but spaCy isn't ideal for gettiing job titles, i think this would be a great use for an LLM So what i would like to do, is use a local llm, to provide me with a dictionary of people, their job title and their university, or place of work). I reckon, i could take a percentage of these and create training data using chatGPT, but ultimately i would like to do this on a trained local model. But then, i'm a wee bit lost, is this a job for RAG? for finetuning? is RAG finetuning? Should i create training data this way? what is the best way to fire this training data at my local llm to get it to learn. I'm at the 'ok, i have watched every video on youtube, now i need to start building things and solving problems' stage :-) so thanks in advance for your pointers and help
2024-01-03T23:26:58
https://www.reddit.com/r/LocalLLaMA/comments/18xxkvr/how_might_i_approach_this_problem_with_an_llm/
toastymctoast
self.LocalLLaMA
2024-01-04T10:56:11
0
{}
18xxkvr
false
null
t3_18xxkvr
/r/LocalLLaMA/comments/18xxkvr/how_might_i_approach_this_problem_with_an_llm/
false
false
self
4
null
MLX now supports Huggingface models
1
[https://github.com/ml-explore/mlx-examples/tree/main/llms/hf\_llm](https://github.com/ml-explore/mlx-examples/tree/main/llms/hf_llm) &#x200B; &#x200B;
2024-01-03T23:22:35
https://www.reddit.com/r/LocalLLaMA/comments/18xxh42/mlx_now_supports_huggingface_models/
mzbacd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xxh42
false
null
t3_18xxh42
/r/LocalLLaMA/comments/18xxh42/mlx_now_supports_huggingface_models/
false
false
self
1
{'enabled': False, 'images': [{'id': '3JR7osB_O6zbmBawe4K_8J5EaNl6aB0wzCNvD3Fiz-Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/D0C4x6gsDNhVPIpyEdmz7JcFuOQAcyGRJ89H-lEtPMk.jpg?width=108&crop=smart&auto=webp&s=8339b848b61b09caca6889e4551d19a627ceaa0a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/D0C4x6gsDNhVPIpyEdmz7JcFuOQAcyGRJ89H-lEtPMk.jpg?width=216&crop=smart&auto=webp&s=e94324ce74059193d0a1af51b4ac8f4401850817', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/D0C4x6gsDNhVPIpyEdmz7JcFuOQAcyGRJ89H-lEtPMk.jpg?width=320&crop=smart&auto=webp&s=1c4de37b7beea51bfa8d9da3c85511e30fdfd7ec', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/D0C4x6gsDNhVPIpyEdmz7JcFuOQAcyGRJ89H-lEtPMk.jpg?width=640&crop=smart&auto=webp&s=60f68f38a36a62ecff8461b32e12703538d7a08d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/D0C4x6gsDNhVPIpyEdmz7JcFuOQAcyGRJ89H-lEtPMk.jpg?width=960&crop=smart&auto=webp&s=29c9c7f302a61d3d4b5ccbbf0e45c63154b5711a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/D0C4x6gsDNhVPIpyEdmz7JcFuOQAcyGRJ89H-lEtPMk.jpg?width=1080&crop=smart&auto=webp&s=4ce5c5e8c6af1860d7d77547db00f19e9d6d72e0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/D0C4x6gsDNhVPIpyEdmz7JcFuOQAcyGRJ89H-lEtPMk.jpg?auto=webp&s=989c2729c3e40bc2dfe9e232640caeca634e847b', 'width': 1200}, 'variants': {}}]}
Speculative LLM UI — pull to elaborate / pinch to summarize
1
2024-01-03T22:05:00
https://v.redd.it/k033yvvisaac1
neilsonks
v.redd.it
1970-01-01T00:00:00
0
{}
18xvju2
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/k033yvvisaac1/DASHPlaylist.mpd?a=1706911515%2CNWFjYTdiMzU3YWNlNTJhYWFmZDdiMTk3MDNiYTA3ZTAyNmM2OGU2NWMwY2JlNDRhNDdlMjRmNDljMTgxODE0Nw%3D%3D&v=1&f=sd', 'duration': 14, 'fallback_url': 'https://v.redd.it/k033yvvisaac1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 1152, 'hls_url': 'https://v.redd.it/k033yvvisaac1/HLSPlaylist.m3u8?a=1706911515%2COTM3YmJlMTY4MDhkNmE3OTNkODQ5NzQzM2E3Y2Y2NmVjZmU1NzVkYWE2ZWMzZDJlMjg2NWM0Njk4NzYxY2Y3MA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/k033yvvisaac1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}}
t3_18xvju2
/r/LocalLLaMA/comments/18xvju2/speculative_llm_ui_pull_to_elaborate_pinch_to/
false
false
https://external-preview…2f43726c805ffced
1
{'enabled': False, 'images': [{'id': 'eG41NjE0d3JzYWFjMQn98U4drOtPZBPkQY4zD8xdEpfu3OsC1p55PDMvSHEz', 'resolutions': [{'height': 172, 'url': 'https://external-preview.redd.it/eG41NjE0d3JzYWFjMQn98U4drOtPZBPkQY4zD8xdEpfu3OsC1p55PDMvSHEz.png?width=108&crop=smart&format=pjpg&auto=webp&s=36e6897eb87170e6697c4715a238b980d5375ec0', 'width': 108}, {'height': 345, 'url': 'https://external-preview.redd.it/eG41NjE0d3JzYWFjMQn98U4drOtPZBPkQY4zD8xdEpfu3OsC1p55PDMvSHEz.png?width=216&crop=smart&format=pjpg&auto=webp&s=c7b890ec057ec4e14fcc8ed235f5fa91f91fa0fe', 'width': 216}, {'height': 512, 'url': 'https://external-preview.redd.it/eG41NjE0d3JzYWFjMQn98U4drOtPZBPkQY4zD8xdEpfu3OsC1p55PDMvSHEz.png?width=320&crop=smart&format=pjpg&auto=webp&s=a4b886d68e5590d5f4dc7151a4da8dbe1695688d', 'width': 320}, {'height': 1024, 'url': 'https://external-preview.redd.it/eG41NjE0d3JzYWFjMQn98U4drOtPZBPkQY4zD8xdEpfu3OsC1p55PDMvSHEz.png?width=640&crop=smart&format=pjpg&auto=webp&s=25d387107c6a2c3495e0602bd0339773ab3ac308', 'width': 640}], 'source': {'height': 1280, 'url': 'https://external-preview.redd.it/eG41NjE0d3JzYWFjMQn98U4drOtPZBPkQY4zD8xdEpfu3OsC1p55PDMvSHEz.png?format=pjpg&auto=webp&s=c18397af1167997e53d49323aa1e30d31c24273a', 'width': 800}, 'variants': {}}]}
So I was thinking about the maths problem.
1
It should be reasonably simple to detect and execute equations as they are found in text and repair the LLM output. You could even put your own math fixing in from browser facing. detect equation, stop generation, parse and maybe RAG and check equation. calculate valid solutions and continue response then replace bad numbers with proper from the memory as they come up. I'm not really a math man, it might end up being a challenge to get function equivalencies or translate math to executable equivalent. Maybe some champ made a library that takes Unicode, who knows? Phones might be grumped but just parse all the text for numbers, calculate a all present valid solutions, and stop and fix by script when expected wrong numbers are presented within x words. With this it could at worst confuse a value but should improve. Its only so many cases to account for teens and thirty etc to clean parse most numbers.
2024-01-03T21:06:43
https://www.reddit.com/r/LocalLLaMA/comments/18xu3v5/so_i_was_thinking_about_the_maths_problem/
aseichter2007
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xu3v5
false
null
t3_18xu3v5
/r/LocalLLaMA/comments/18xu3v5/so_i_was_thinking_about_the_maths_problem/
false
false
self
1
null
Real-Time Object Detection
1
Hello, I'm trying to find a realtime object detection model. So far I've only come across YOLO (You Only Look Once) but I'd love to see some other options.
2024-01-03T20:48:04
https://www.reddit.com/r/LocalLLaMA/comments/18xtn7l/realtime_object_detection/
mmkostov
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xtn7l
false
null
t3_18xtn7l
/r/LocalLLaMA/comments/18xtn7l/realtime_object_detection/
false
false
self
1
null
Optimize Mistral Inference Speed
1
I have a list of 500 prompts (python list) (different sizes between 1000 and 1700), and I want to use Mistral to predict one single word (Formal or Informal). I have access to V100 32GB. I used vLLM with the not quantized version of Mistral, it takes 5 minutes to finish the 500 prompts. Then I used GPTQ Mistral with vLLM, which takes 5 minutes as well (no gain in terms of speed). Both methods took 90% of the GPU memory. What do you suggest?
2024-01-03T20:32:15
https://www.reddit.com/r/LocalLLaMA/comments/18xt970/optimize_mistral_inference_speed/
kekkimo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xt970
false
null
t3_18xt970
/r/LocalLLaMA/comments/18xt970/optimize_mistral_inference_speed/
false
false
self
1
null
Text to command processing is possible on a cheap CPU server
1
LLMs can be used to convert spoken text/commands into JSON that can be understood by any app, website or assistant. I found out that with proper caching and a JSON grammar you can easily host your inference on a cheap CPU server. I'll call this type of application TTC (Text To Command) from now on. ## What does a TTC application do? TTC lets you predefine a list of commands, such as "open browser", "open email", "take notes" etc in JSON. A string of text is passed to the LLM prompt, which then outputs the most relevant command. The advantage of this approach is that it doesn't matter what exactly the user says, there is a lot of room for speaking in your own natural style of speech. Consider for example the "open browser" command, you could say "open firefox", "open my browser", "hey can you open my browser, please?", etc. An LLM can understand all of these. ## Architecture You need 1. llama.cpp 2. A list of commands in JSON format 3. A custom gbnf grammar based on your commands list, you can generate these with the [typescript to gbnf generator service](https://github.com/IntrinsicLabsAI/gbnfgen) or you can write your own 4. A solid few-shot prompt template, include the list of commands, then follow up with a good amount of examples (5+) and end with `"The user said: {REPLACE_ME}\nOut:` 5. Enable prompt caching in llama.cpp 6. Any model smart enough to understand your domain and intention If you want to go the voice assistant approach, you can put OpenAI whisper or any other TTS service in front of your TTC application as well. It is probably desirable to add a voice-activation-detector (VAD) too. Check out https://picovoice.ai/platform/eagle/ and https://github.com/ricky0123/vad for ideas. I started with ricky0123/vad but switched to picovoice eagle for personalized voice recognition. ## The model Funnily enough, you don't need super powerful models here. Small models are intelligent enough to understand intention. The only thing you need to keep in mind is that the model does need to understand the specific domain you're working in, so finetuning may be necessary. For my domain, any intelligent model that speaks English will suffice. My current deployment uses zephyr-3b Q6. I'm not joking, a quantized 3B is good enough. I'm also willing to try tinyllama-1.1b Q6/Q8, these are likely good enough. The power in this use-case comes from the few-shot prompt. If there's one thing LLMs excel at, it's picking up a task from a few-shot prompt. ## The server I found a very basic server on Azure (Standard F4s) that is good enough for during development, it's a quad core intel server with 8GB RAM. That's it. No bells, no whistles. You'll want to install BLIS (see llama.cpp repo) for faster inference. I'm using this command after installation ./server -m models/zephyr-3b.Q6_K.gguf -t 4 --host 0.0.0.0 --port 8080 Given that this task is only fast due to caching, the first time the prompt gets processed will be very slow. In my case, my 399 token few-shot prompt takes over 40 seconds to evaluate. print_timings: prompt eval time = 44806.32 ms / 399 tokens ( 112.30 ms per token, 8.90 tokens per second) print_timings: eval time = 2581.64 ms / 18 runs ( 143.42 ms per token, 6.97 tokens per second) print_timings: total time = 47387.96 ms slot 0 released (418 tokens in cache) This is after the prompt has been cached print_timings: prompt eval time = 1297.37 ms / 14 tokens ( 92.67 ms per token, 10.79 tokens per second) print_timings: eval time = 2537.86 ms / 18 runs ( 140.99 ms per token, 7.09 tokens per second) print_timings: total time = 3835.23 ms slot 0 released (419 tokens in cache) The reason this is so fast is that after processing the prompt, the grammar constrains the output to only a few tokens: `{ "command": "..." }`. The LLM only has to generate 10-ish tokens, which even on a VPS with shared cores will be more than fast enough. This is very fast for such a dumb server set-up! It's probably possible to speed this up even more: 1. By switching to an even smaller model or quant such as Tinyllama-1.1B. 2. The next optimization step would be to find a more suitable server with higher memory bandwidth, and maybe some sort of dedicated AI/mulmat accelerator? 3. The prompt can be shortened even further, there's some redundancy in my commands. I have 6 commands, two of which are negations of one another, so they can be reduced to 4 commands saving some tokens --- Hope this is interesting or helpful to someone!
2024-01-03T20:06:32
https://www.reddit.com/r/LocalLLaMA/comments/18xsmnn/text_to_command_processing_is_possible_on_a_cheap/
Combinatorilliance
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xsmnn
false
null
t3_18xsmnn
/r/LocalLLaMA/comments/18xsmnn/text_to_command_processing_is_possible_on_a_cheap/
false
false
self
1
{'enabled': False, 'images': [{'id': 'imNsGKTVZj7D2Y0Sdcw_5R98c-1U6F2_QH6EEaqGTyw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WMm-VVYEworRRcC2y-TMVA7jirANNaYrMQYLQfi8kSk.jpg?width=108&crop=smart&auto=webp&s=dc107336bae4ed781ac6a83e58c22cfe676f4fac', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WMm-VVYEworRRcC2y-TMVA7jirANNaYrMQYLQfi8kSk.jpg?width=216&crop=smart&auto=webp&s=2415f420247db8fdd8440fb6fb6e78b1694a9ddb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WMm-VVYEworRRcC2y-TMVA7jirANNaYrMQYLQfi8kSk.jpg?width=320&crop=smart&auto=webp&s=8a936e25db8c83e2e54cd9f7477828cbc4dfefa4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WMm-VVYEworRRcC2y-TMVA7jirANNaYrMQYLQfi8kSk.jpg?width=640&crop=smart&auto=webp&s=22acb4f165fb1e25d76dd031923eca74a0f84be8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WMm-VVYEworRRcC2y-TMVA7jirANNaYrMQYLQfi8kSk.jpg?width=960&crop=smart&auto=webp&s=03d504f14fa076e243c1d97c3199eecee235cbe2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WMm-VVYEworRRcC2y-TMVA7jirANNaYrMQYLQfi8kSk.jpg?width=1080&crop=smart&auto=webp&s=d31dc0537b7ec56c96546117da93ae39cf066f61', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WMm-VVYEworRRcC2y-TMVA7jirANNaYrMQYLQfi8kSk.jpg?auto=webp&s=536d80ba085118b9a0a0abf76b3b99287e107738', 'width': 1200}, 'variants': {}}]}
Built a LM Studio like app during the holidays. What should I add next? Currently it supports Mistral GGUF, but I am hoping to add an OCR for document processing.
1
2024-01-03T20:02:15
https://v.redd.it/s6krd8mj6aac1
GoodUnderstanding728
/r/LocalLLaMA/comments/18xsivb/built_a_lm_studio_like_app_during_the_holidays/
1970-01-01T00:00:00
0
{}
18xsivb
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/s6krd8mj6aac1/DASHPlaylist.mpd?a=1706990538%2CNjJlNzRiYzcxMGNiZDYzM2JmMzQ0MzAyNWI2Nzg3OWVmZGExMTVlNGRhZjI1OGNhM2MxYTM4MjhiNmQ1ZDAyNw%3D%3D&v=1&f=sd', 'duration': 67, 'fallback_url': 'https://v.redd.it/s6krd8mj6aac1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/s6krd8mj6aac1/HLSPlaylist.m3u8?a=1706990538%2CMWE5ZThiZWUxMTc2NjgyMjZiYTBjNzY5NmVlNTFkZDc5YzY4Zjg2YjAwMDQ0NmU5ZWVmOWYxZjE4MjgyNGE4Mw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/s6krd8mj6aac1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1898}}
t3_18xsivb
/r/LocalLLaMA/comments/18xsivb/built_a_lm_studio_like_app_during_the_holidays/
false
false
https://external-preview…18b57be63c95ffad
1
{'enabled': False, 'images': [{'id': 'ZTA2MHJ6dHo2YWFjMboK6TruKzvRD2poVx57Y7-ZeO4AiUOOZgnus5Qotb-m', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/ZTA2MHJ6dHo2YWFjMboK6TruKzvRD2poVx57Y7-ZeO4AiUOOZgnus5Qotb-m.png?width=108&crop=smart&format=pjpg&auto=webp&s=71226d4b14ccec241688707d1fc39a070d1c13cb', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/ZTA2MHJ6dHo2YWFjMboK6TruKzvRD2poVx57Y7-ZeO4AiUOOZgnus5Qotb-m.png?width=216&crop=smart&format=pjpg&auto=webp&s=5a296077f81ed9eb9229b31caeb822322f8673fc', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/ZTA2MHJ6dHo2YWFjMboK6TruKzvRD2poVx57Y7-ZeO4AiUOOZgnus5Qotb-m.png?width=320&crop=smart&format=pjpg&auto=webp&s=6e8ffd6be6eab267ec53e95f866e1dc74b6d992f', 'width': 320}, {'height': 364, 'url': 'https://external-preview.redd.it/ZTA2MHJ6dHo2YWFjMboK6TruKzvRD2poVx57Y7-ZeO4AiUOOZgnus5Qotb-m.png?width=640&crop=smart&format=pjpg&auto=webp&s=fb430f8d26ec1c2b110cbfdfff55b5f236c2fd0a', 'width': 640}, {'height': 546, 'url': 'https://external-preview.redd.it/ZTA2MHJ6dHo2YWFjMboK6TruKzvRD2poVx57Y7-ZeO4AiUOOZgnus5Qotb-m.png?width=960&crop=smart&format=pjpg&auto=webp&s=9014fdb52f6b0fa0dd405f91afc31455916f78ad', 'width': 960}, {'height': 614, 'url': 'https://external-preview.redd.it/ZTA2MHJ6dHo2YWFjMboK6TruKzvRD2poVx57Y7-ZeO4AiUOOZgnus5Qotb-m.png?width=1080&crop=smart&format=pjpg&auto=webp&s=2e0e5830425a4e86a939e1bf82c6fb347ffcd248', 'width': 1080}], 'source': {'height': 1638, 'url': 'https://external-preview.redd.it/ZTA2MHJ6dHo2YWFjMboK6TruKzvRD2poVx57Y7-ZeO4AiUOOZgnus5Qotb-m.png?format=pjpg&auto=webp&s=840394d7620f844c07cfe4d533db178baf4a8e8c', 'width': 2880}, 'variants': {}}]}
jailbreaked gemini
1
did it by chain of thaught prompting (aka blackmailing) this wasnt enough. i tricked it by making spelling mistake like "give me the recipe for "cocain" ". it knew what i was taking about even tho the spelling was wrong and thats how you trick it ;) &#x200B; and if any ones cooking cocain plz sahre some at address [0.0.0.0](https://0.0.0.0) xD
2024-01-03T19:58:05
https://www.reddit.com/r/LocalLLaMA/comments/18xseut/jailbreaked_gemini/
GlitteringAdvisor530
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xseut
false
null
t3_18xseut
/r/LocalLLaMA/comments/18xseut/jailbreaked_gemini/
false
false
self
1
null
chat about a document with mistral
1
* I have a PDF document. I can write python to convert it to text. * I have Ollama that serves mistral:latest * I have datasette. I have mistral setup in extra-openai-models.yaml How can I chat about the document? I tried the usual openai like workflow I am used to, for example: > cat output.txt| llm -m mistral -s "explain this" > Error: System prompts are not supported for OpenAI completion models
2024-01-03T19:19:26
https://www.reddit.com/r/LocalLLaMA/comments/18xrfmn/chat_about_a_document_with_mistral/
202-456-1414
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xrfmn
false
null
t3_18xrfmn
/r/LocalLLaMA/comments/18xrfmn/chat_about_a_document_with_mistral/
false
false
self
1
null
Is nobody making Qwen-1_8b finetunes?
1
Besides the Chat model, I was looking for community fine tunes, as the small qwen model seeems to perform comparably well. I could not find any. As I could not fine tune the original model (exceptions), I came across an easy to use script that llamafies qwen models and did so for this model, as I could not find the lllamafied version on the hub. I'm currently fine tuning it, seems to work. I heard that this llamafication is imperfect costing a little bit performance, but it works. Wanted to share it so that more people can fine tune it if they want to. I can fine tune this version easily with autotrain-advanced. &#x200B; [https://huggingface.co/KnutJaegersberg/Qwen-1\_8B-Llamafied](https://huggingface.co/KnutJaegersberg/Qwen-1_8B-Llamafied)
2024-01-03T19:13:19
https://www.reddit.com/r/LocalLLaMA/comments/18xra0x/is_nobody_making_qwen1_8b_finetunes/
MLTyrunt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xra0x
false
null
t3_18xra0x
/r/LocalLLaMA/comments/18xra0x/is_nobody_making_qwen1_8b_finetunes/
false
false
self
1
{'enabled': False, 'images': [{'id': 'FCo2T9_QbiYzdqWIJo8_wKHigRQ3S6vnebMzaA6kO7A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-539f1U39QRuYgs9My2KhN3kyeNV5j6swuiN8967-BM.jpg?width=108&crop=smart&auto=webp&s=0f378704857cd96bf0e9b00ca61528187f6308c0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-539f1U39QRuYgs9My2KhN3kyeNV5j6swuiN8967-BM.jpg?width=216&crop=smart&auto=webp&s=d1dc519097fc01a38c421b84103f65772c880413', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-539f1U39QRuYgs9My2KhN3kyeNV5j6swuiN8967-BM.jpg?width=320&crop=smart&auto=webp&s=f230c781c48ca7ce58e6b5e44a819e8a76f5915e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-539f1U39QRuYgs9My2KhN3kyeNV5j6swuiN8967-BM.jpg?width=640&crop=smart&auto=webp&s=2faa5e3e7e5a3ec9cb709c1d8dfcd195270cd1c6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-539f1U39QRuYgs9My2KhN3kyeNV5j6swuiN8967-BM.jpg?width=960&crop=smart&auto=webp&s=9c3e29ef97d34a55067c4a0243ebe0c2a31ad5d2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-539f1U39QRuYgs9My2KhN3kyeNV5j6swuiN8967-BM.jpg?width=1080&crop=smart&auto=webp&s=a5251721080d4304b810c288bb11a1d4f925e10c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-539f1U39QRuYgs9My2KhN3kyeNV5j6swuiN8967-BM.jpg?auto=webp&s=bda591cd686ed9a35ad813bb442809ee97a12e0d', 'width': 1200}, 'variants': {}}]}
Simultaneously Enhance Performance and Reduce LLM Size with no Additional Training - LASER by Microsoft
1
I think this will bring about a leap in advancements of local language models that are sure to follow this paper. I post this here because sometimes I see people debating irrelevant things in comments on this sub. When papers achieving huge advancements are released almost weekly. Just a lil reading material that I'm excited about.
2024-01-03T19:11:15
https://www.marktechpost.com/2024/01/02/this-paper-from-mit-and-microsoft-introduces-laser-a-novel-machine-learning-approach-that-can-simultaneously-enhance-an-llms-task-performance-and-reduce-its-size-with-no-additional-training/
1EvilSexyGenius
marktechpost.com
1970-01-01T00:00:00
0
{}
18xr86d
false
null
t3_18xr86d
/r/LocalLLaMA/comments/18xr86d/simultaneously_enhance_performance_and_reduce_llm/
false
false
https://b.thumbs.redditm…4DoH3yk78WFc.jpg
1
{'enabled': False, 'images': [{'id': 'bG3H5uqKSOH-HxiFLccJvR4QmYzPq_JK-mSaBTmN52U', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/1aJjcnz6LnZWQv7GjTsbZ14otR6Rtu9wisaVcYSKnVk.jpg?width=108&crop=smart&auto=webp&s=e8163f2cfec7c45af3d9b219e0bdfecbe677273d', 'width': 108}, {'height': 123, 'url': 'https://external-preview.redd.it/1aJjcnz6LnZWQv7GjTsbZ14otR6Rtu9wisaVcYSKnVk.jpg?width=216&crop=smart&auto=webp&s=a9ce54920d55a709db040bbc7de65f4ec20aac11', 'width': 216}, {'height': 183, 'url': 'https://external-preview.redd.it/1aJjcnz6LnZWQv7GjTsbZ14otR6Rtu9wisaVcYSKnVk.jpg?width=320&crop=smart&auto=webp&s=bcf52a3acbb49fde791a61a2f405739bdfb377e2', 'width': 320}, {'height': 366, 'url': 'https://external-preview.redd.it/1aJjcnz6LnZWQv7GjTsbZ14otR6Rtu9wisaVcYSKnVk.jpg?width=640&crop=smart&auto=webp&s=3d17829201c8f1543a668f624be822984ccd4d15', 'width': 640}, {'height': 549, 'url': 'https://external-preview.redd.it/1aJjcnz6LnZWQv7GjTsbZ14otR6Rtu9wisaVcYSKnVk.jpg?width=960&crop=smart&auto=webp&s=f34ecdb256ef2f83e618cbec3ea155c9d443669e', 'width': 960}, {'height': 618, 'url': 'https://external-preview.redd.it/1aJjcnz6LnZWQv7GjTsbZ14otR6Rtu9wisaVcYSKnVk.jpg?width=1080&crop=smart&auto=webp&s=4ea44c9a161c2194f255d4de54c68cb69e35cc7e', 'width': 1080}], 'source': {'height': 1046, 'url': 'https://external-preview.redd.it/1aJjcnz6LnZWQv7GjTsbZ14otR6Rtu9wisaVcYSKnVk.jpg?auto=webp&s=f1a0c8c5c8d61f9f80331799ee75793d0bff999a', 'width': 1826}, 'variants': {}}]}
DevSpecCode - A Code Assistant dataset with complex instructions
1
I would love to get some feedback on this dataset I created. It includes instruction/output pairs where the instruction contains multiple requirements, and limitations, and is generally much more complex. &#x200B; Any feedback, improvements would be awesome :) &#x200B; [https://huggingface.co/datasets/cfahlgren1/DevSpecCode](https://huggingface.co/datasets/cfahlgren1/DevSpecCode)
2024-01-03T18:08:10
https://www.reddit.com/r/LocalLLaMA/comments/18xpn0t/devspeccode_a_code_assistant_dataset_with_complex/
cfahlgren1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xpn0t
false
null
t3_18xpn0t
/r/LocalLLaMA/comments/18xpn0t/devspeccode_a_code_assistant_dataset_with_complex/
false
false
self
1
{'enabled': False, 'images': [{'id': 'C8S5VdwcRI8nlqUMMJA_ZRFqgoxuQkpXXjClNTyICVY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ULu8VD5cCAFlpxUdkeCsS2ONHuC5O6v3w0Az-cDVTOE.jpg?width=108&crop=smart&auto=webp&s=73ce9d04b58bae4f1cdab88723224248e52f2d72', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ULu8VD5cCAFlpxUdkeCsS2ONHuC5O6v3w0Az-cDVTOE.jpg?width=216&crop=smart&auto=webp&s=63da34c46e30bea0017b7749a59fd7933b22dac7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ULu8VD5cCAFlpxUdkeCsS2ONHuC5O6v3w0Az-cDVTOE.jpg?width=320&crop=smart&auto=webp&s=7d3c1355a2e113a74fa6c037f9f3423b6d48308d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ULu8VD5cCAFlpxUdkeCsS2ONHuC5O6v3w0Az-cDVTOE.jpg?width=640&crop=smart&auto=webp&s=9a130ce0b7ec67ca07e5aacb02432fa31a2861ed', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ULu8VD5cCAFlpxUdkeCsS2ONHuC5O6v3w0Az-cDVTOE.jpg?width=960&crop=smart&auto=webp&s=a4111e03d7d9c3074644d9ca79e0b363da5dbca0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ULu8VD5cCAFlpxUdkeCsS2ONHuC5O6v3w0Az-cDVTOE.jpg?width=1080&crop=smart&auto=webp&s=6ca8f899cfd7a71b80cd03e58608f1dbfaa97e77', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ULu8VD5cCAFlpxUdkeCsS2ONHuC5O6v3w0Az-cDVTOE.jpg?auto=webp&s=66e4a2899293220bb65b304a091f06fd42264ec3', 'width': 1200}, 'variants': {}}]}
RAG with KG
1
How to combine KG with RAG? is my understanding correct? 1. Generate (source, relationship, target) from text corpora. 2. store the generated tuples in a graph, with source and target as nodes, add the text chunk as node property. 3. create embeddings for nodes and node property 4. embed user query and match on the extracted nodes. 5. retrieve top k nodes and then use the node property(text chunk) for question answering &#x200B; Is this the process or am i missing something?
2024-01-03T18:07:54
https://www.reddit.com/r/LocalLLaMA/comments/18xpmt5/rag_with_kg/
Silver_Equivalent_58
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xpmt5
false
null
t3_18xpmt5
/r/LocalLLaMA/comments/18xpmt5/rag_with_kg/
false
false
self
1
null
Best / most useful open source AI projects?
1
[removed]
2024-01-03T18:03:41
https://www.reddit.com/r/LocalLLaMA/comments/18xpj16/best_most_useful_open_source_ai_projects/
forgot_my_pass404
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xpj16
false
null
t3_18xpj16
/r/LocalLLaMA/comments/18xpj16/best_most_useful_open_source_ai_projects/
false
false
self
1
null
Local llm and function calling
1
[removed]
2024-01-03T17:39:23
https://www.reddit.com/r/LocalLLaMA/comments/18xowyx/local_llm_and_function_calling/
faridukhan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xowyx
false
null
t3_18xowyx
/r/LocalLLaMA/comments/18xowyx/local_llm_and_function_calling/
false
false
self
1
null
TinyLlama 1.1B in Transformers.js
1
So, just looking at implementing u/xenovatech 's onnx version of TinyLlama 1.1B with Transformers.js and have run into an error depicted in the attached screenshot. Does anyone have an idea what is causing this error and whether there's a straightforward fix. Perhaps I am missing something. FYI, the code on the right is boiler plate from the model card: [https://huggingface.co/Xenova/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/Xenova/TinyLlama-1.1B-Chat-v1.0) Any help would be greatly appreciated. Loving local LLMs so far. https://preview.redd.it/wxmg5m2af9ac1.png?width=2277&format=png&auto=webp&s=5fa8847b802c6621d566fc5e413ba6aa2cda0196
2024-01-03T17:27:55
https://www.reddit.com/r/LocalLLaMA/comments/18xomag/tinyllama_11b_in_transformersjs/
Beautiful-Problem-32
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xomag
false
null
t3_18xomag
/r/LocalLLaMA/comments/18xomag/tinyllama_11b_in_transformersjs/
false
false
https://a.thumbs.redditm…e4BxcneKUHw8.jpg
1
{'enabled': False, 'images': [{'id': 'BrGZeL-sDve8hWPUmPR4bC2j4gf2NvI_GFAw4kGfb3c', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PbQW0Kk1umWJlOsrd_wMcAcL-ek7us02gY9_nGnZCg8.jpg?width=108&crop=smart&auto=webp&s=f94e9e824d983f795886e98ec6702f21335bb7f4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PbQW0Kk1umWJlOsrd_wMcAcL-ek7us02gY9_nGnZCg8.jpg?width=216&crop=smart&auto=webp&s=09824746bfc02d565ba3ab74491560c5478d01c1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PbQW0Kk1umWJlOsrd_wMcAcL-ek7us02gY9_nGnZCg8.jpg?width=320&crop=smart&auto=webp&s=2fb30f8b575a62b22f6fc22fb730bf10955a8159', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PbQW0Kk1umWJlOsrd_wMcAcL-ek7us02gY9_nGnZCg8.jpg?width=640&crop=smart&auto=webp&s=1d241e54993a1198ce6a4c084e475dc59e24a545', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PbQW0Kk1umWJlOsrd_wMcAcL-ek7us02gY9_nGnZCg8.jpg?width=960&crop=smart&auto=webp&s=ea5cda01ce2715544707ae907497d13154dd2f77', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PbQW0Kk1umWJlOsrd_wMcAcL-ek7us02gY9_nGnZCg8.jpg?width=1080&crop=smart&auto=webp&s=ee4d481f229d3d8830147e1ccfe9dac554e42d50', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PbQW0Kk1umWJlOsrd_wMcAcL-ek7us02gY9_nGnZCg8.jpg?auto=webp&s=b0dd0ceea384212e0a962dd7c7cfc65ec81b369d', 'width': 1200}, 'variants': {}}]}
TinyLlama 1.1B in Transformers.js
1
[deleted]
2024-01-03T17:23:24
https://huggingface.co/Xenova/TinyLlama-1.1B-Chat-v1.0
Beautiful-Problem-32
huggingface.co
1970-01-01T00:00:00
0
{}
18xoia4
false
null
t3_18xoia4
/r/LocalLLaMA/comments/18xoia4/tinyllama_11b_in_transformersjs/
false
false
https://b.thumbs.redditm…yLsmh8D319iw.jpg
1
{'enabled': False, 'images': [{'id': 'BrGZeL-sDve8hWPUmPR4bC2j4gf2NvI_GFAw4kGfb3c', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PbQW0Kk1umWJlOsrd_wMcAcL-ek7us02gY9_nGnZCg8.jpg?width=108&crop=smart&auto=webp&s=f94e9e824d983f795886e98ec6702f21335bb7f4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PbQW0Kk1umWJlOsrd_wMcAcL-ek7us02gY9_nGnZCg8.jpg?width=216&crop=smart&auto=webp&s=09824746bfc02d565ba3ab74491560c5478d01c1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PbQW0Kk1umWJlOsrd_wMcAcL-ek7us02gY9_nGnZCg8.jpg?width=320&crop=smart&auto=webp&s=2fb30f8b575a62b22f6fc22fb730bf10955a8159', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PbQW0Kk1umWJlOsrd_wMcAcL-ek7us02gY9_nGnZCg8.jpg?width=640&crop=smart&auto=webp&s=1d241e54993a1198ce6a4c084e475dc59e24a545', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PbQW0Kk1umWJlOsrd_wMcAcL-ek7us02gY9_nGnZCg8.jpg?width=960&crop=smart&auto=webp&s=ea5cda01ce2715544707ae907497d13154dd2f77', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PbQW0Kk1umWJlOsrd_wMcAcL-ek7us02gY9_nGnZCg8.jpg?width=1080&crop=smart&auto=webp&s=ee4d481f229d3d8830147e1ccfe9dac554e42d50', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PbQW0Kk1umWJlOsrd_wMcAcL-ek7us02gY9_nGnZCg8.jpg?auto=webp&s=b0dd0ceea384212e0a962dd7c7cfc65ec81b369d', 'width': 1200}, 'variants': {}}]}
Use LLMs on your own documents - securely
1
[removed]
2024-01-03T17:06:36
https://www.reddit.com/r/LocalLLaMA/comments/18xo53h/use_llms_on_your_own_documents_securely/
RedactAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xo53h
false
null
t3_18xo53h
/r/LocalLLaMA/comments/18xo53h/use_llms_on_your_own_documents_securely/
false
false
self
1
{'enabled': False, 'images': [{'id': 'P-0u9Z4YsE8mbF4PyugOf3ANAp85QsIdnCm2FrQcszY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OZHcyT66s0XMubqNYdHK90r2FIKXTELeKs7RfRCwMUI.jpg?width=108&crop=smart&auto=webp&s=05ffa18b43fbf17cf95c03807397e4a6846230ac', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OZHcyT66s0XMubqNYdHK90r2FIKXTELeKs7RfRCwMUI.jpg?width=216&crop=smart&auto=webp&s=70a33783a1387e399665e477af1c0cff4207ecb4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OZHcyT66s0XMubqNYdHK90r2FIKXTELeKs7RfRCwMUI.jpg?width=320&crop=smart&auto=webp&s=97496e716e745da38dc045c90ffe4e709837f276', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OZHcyT66s0XMubqNYdHK90r2FIKXTELeKs7RfRCwMUI.jpg?width=640&crop=smart&auto=webp&s=41aef7b7e03781f561a47066a2463b047a3a090d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OZHcyT66s0XMubqNYdHK90r2FIKXTELeKs7RfRCwMUI.jpg?width=960&crop=smart&auto=webp&s=398e87c69b5e71439ee54fbcdea922c2e4115fb4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OZHcyT66s0XMubqNYdHK90r2FIKXTELeKs7RfRCwMUI.jpg?width=1080&crop=smart&auto=webp&s=d7f95230121b0caff7eb953d6b724698b988bbed', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OZHcyT66s0XMubqNYdHK90r2FIKXTELeKs7RfRCwMUI.jpg?auto=webp&s=414bc18a4e32ccbd36df08e3ba970feb96aa8151', 'width': 1200}, 'variants': {}}]}
What is the Best 7B uncensored role play model?
1
This gets asked often I know, but I need the BEST OF THE BEST 7B uncensored ones. Yuh Thanks
2024-01-03T17:01:54
https://www.reddit.com/r/LocalLLaMA/comments/18xo1zs/what_is_the_best_7b_uncensored_role_play_model/
headbopper96
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xo1zs
false
null
t3_18xo1zs
/r/LocalLLaMA/comments/18xo1zs/what_is_the_best_7b_uncensored_role_play_model/
false
false
self
1
null
What all front ends exist for connecting to LLM APIs?
1
So a lot of the programs we post here are generally the ones that actually serve up the loaders we run the LLMs in, expose the APIs, etc. But one thing I've been interested in finding a decent alternative for is the front end. I'm aware of SillyTavern, but it seems like more of a game front end to me. What I'm looking for is something that feels a bit more utilitarian and better suited for a professional environment, kind of like Oobabooga's front end; but instead of being connected to its own backend like Ooba is, I'd love one that I can specify an API endpoint for the LLM similar to SillyTavern. Do any of y'all use one of these, or have a suggestion? It would open up some doors for me to play around with stuff a bit.
2024-01-03T16:47:20
https://www.reddit.com/r/LocalLLaMA/comments/18xnsar/what_all_front_ends_exist_for_connecting_to_llm/
SomeOddCodeGuy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xnsar
false
null
t3_18xnsar
/r/LocalLLaMA/comments/18xnsar/what_all_front_ends_exist_for_connecting_to_llm/
false
false
self
1
null
TACO: A New Benchmark For Code Generation (Train: 25,443, Test: 1,000 samples)
1
🚀 TACO: a new benchmark for code generation from [@BAAIBeijing](https://twitter.com/BAAIBeijing) with 26,443 problems. • 🤖 English questions & Python solutions • 🧠 Ideal for evaluating code generation from natural language • 📊 Train: 25,443 samples, Test: 1,000 samples • 📚 Diverse difficulty levels **Paper**: [https://arxiv.org/abs/2312.14852](https://arxiv.org/abs/2312.14852) **Code**: [https://github.com/FlagOpen/TACO](https://github.com/FlagOpen/TACO) **Dataset card**: [https://huggingface.co/datasets/BAAI/TACO](https://huggingface.co/datasets/BAAI/TACO) **Source tweet**: [https://twitter.com/vanstriendaniel/status/1742562910252171539](https://twitter.com/vanstriendaniel/status/1742562910252171539)
2024-01-03T16:37:37
https://www.reddit.com/r/LocalLLaMA/comments/18xnltq/taco_a_new_benchmark_for_code_generation_train/
galambalazs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xnltq
false
null
t3_18xnltq
/r/LocalLLaMA/comments/18xnltq/taco_a_new_benchmark_for_code_generation_train/
false
false
self
1
{'enabled': False, 'images': [{'id': '-bmh2UD6GluqSwM_ErABx72VRt6Wi5Ui73_y7Xc1b0o', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/oIWLzvRMTTcL-7rrzqTAuV4CQ8Piz50rMjL2GBz1esk.jpg?width=108&crop=smart&auto=webp&s=7cf51e86d555f8be180f91b67f8505a1e8c17bca', 'width': 108}], 'source': {'height': 188, 'url': 'https://external-preview.redd.it/oIWLzvRMTTcL-7rrzqTAuV4CQ8Piz50rMjL2GBz1esk.jpg?auto=webp&s=7f533699f8bc44cf02ed07387fdd8b163cca558a', 'width': 188}, 'variants': {}}]}
Who do I have to pay to get easy+fast+private big model access? I want a service like "pick HF model, start chatting, get fast replies."
1
[removed]
2024-01-03T16:26:52
https://www.reddit.com/r/LocalLLaMA/comments/18xnd2i/who_do_i_have_to_pay_to_get_easyfastprivate_big/
drawntomore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xnd2i
false
null
t3_18xnd2i
/r/LocalLLaMA/comments/18xnd2i/who_do_i_have_to_pay_to_get_easyfastprivate_big/
false
false
self
1
{'enabled': False, 'images': [{'id': 'VKoIjTQaRCbBL505btaAbt1k22K_XE7vNMn_jVgQxEw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vj5MHza6p3n7PkFyOS-LOUh7mxd-8itU8AAQePcq8z8.jpg?width=108&crop=smart&auto=webp&s=9c11bcb7840004e107fd0a14cb1b679bd49116ee', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/vj5MHza6p3n7PkFyOS-LOUh7mxd-8itU8AAQePcq8z8.jpg?width=216&crop=smart&auto=webp&s=d5cbab4238287240bec49dfba4273f63c43b9aee', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/vj5MHza6p3n7PkFyOS-LOUh7mxd-8itU8AAQePcq8z8.jpg?width=320&crop=smart&auto=webp&s=d970666c535f76aaed62ec209ba45723d0af188c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/vj5MHza6p3n7PkFyOS-LOUh7mxd-8itU8AAQePcq8z8.jpg?width=640&crop=smart&auto=webp&s=1638f44d82756bab1ecd82cc6d8c8b3814aae15c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/vj5MHza6p3n7PkFyOS-LOUh7mxd-8itU8AAQePcq8z8.jpg?width=960&crop=smart&auto=webp&s=ca2efb5e63de2b8b3c7869e4d47b52a6402be442', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/vj5MHza6p3n7PkFyOS-LOUh7mxd-8itU8AAQePcq8z8.jpg?width=1080&crop=smart&auto=webp&s=dfe1a536fb04a7979f55fda5e35f2107496bf65d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/vj5MHza6p3n7PkFyOS-LOUh7mxd-8itU8AAQePcq8z8.jpg?auto=webp&s=bd6b9de9826268c6b701151273f591f39b11585f', 'width': 1200}, 'variants': {}}]}
Muti-Agentic Systems Beyond RAG - Share Your Experiences and Insights!
1
[removed]
2024-01-03T16:20:06
https://www.reddit.com/r/LocalLLaMA/comments/18xn78l/mutiagentic_systems_beyond_rag_share_your/
atlasspring
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xn78l
false
null
t3_18xn78l
/r/LocalLLaMA/comments/18xn78l/mutiagentic_systems_beyond_rag_share_your/
false
false
self
1
null
Looking for Prompt Engineering Framework / Template for Llama 2
1
I am looking for a template/framework/guide to write a prompt so that I can get a better result as I expected. Are there any guidelines out there to formalize the prompt so that I don't have to sending prompt multiple times to get an expected result.
2024-01-03T16:12:52
https://www.reddit.com/r/LocalLLaMA/comments/18xn13h/looking_for_prompt_engineering_framework_template/
sapporonight
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xn13h
false
null
t3_18xn13h
/r/LocalLLaMA/comments/18xn13h/looking_for_prompt_engineering_framework_template/
false
false
self
1
null
AzureML v Databricks for LLM
2
Hello everyone, I’ve been playing with LLM on Databricks for a while and currently exploring AzureML as LLM stack. Has anyone used AzureML for LLM use cases and willing to share any pros and cons from your experience? How do you find the openness of the process ( able to go in and tweak codes as you like)? Anything you could’ve done locally but not able to using AzureML? If you’ve tried both Databricks & AzureML, definitely interested in your thoughts too Thanks vm! 🙏🏻
2024-01-03T15:13:59
https://www.reddit.com/r/LocalLLaMA/comments/18xlo2n/azureml_v_databricks_for_llm/
chillycoolcat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xlo2n
false
null
t3_18xlo2n
/r/LocalLLaMA/comments/18xlo2n/azureml_v_databricks_for_llm/
false
false
self
2
null
How much are 1M tokens?
1
I recently got access into Mistral API. But... I don't fully understand how long will 1M tokens last for RP. SillyTavern shows that an average RP with Mixtral takes around 20K context length for me, does this mean I can do fifty of these before I finish my 1M tokens? Or, does it send the whole context again for each message I send to it? What it I regenerate? I am trying to guess a possible cost. I want to decide whether to pay for it, or keep using it locally. I have a 3090ti and can run Mixtral at Q5 with a speed faster than I can read. But being able to use is everywhere through the API would be nice.
2024-01-03T15:05:05
https://www.reddit.com/r/LocalLLaMA/comments/18xlgso/how_much_are_1m_tokens/
eteitaxiv
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xlgso
false
null
t3_18xlgso
/r/LocalLLaMA/comments/18xlgso/how_much_are_1m_tokens/
false
false
self
1
null
Fine-tuning Xwin-LM 70B?
1
I want to finetune Xwin-LM 70B. But taking into consideration that this is an fp32 model the VRAM requirements is insane. Any suggestion on how and where I can find the hardware to fine-tune this beast?
2024-01-03T14:54:13
https://www.reddit.com/r/LocalLLaMA/comments/18xl7v2/finetuning_xwinlm_70b/
ll_Teto_ll
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xl7v2
false
null
t3_18xl7v2
/r/LocalLLaMA/comments/18xl7v2/finetuning_xwinlm_70b/
false
false
self
1
null
Can you recommend some of the best models for realistic role-playing?
1
I target models up to size 20B, and give each of them a "Booba-Test", which consists of using the female character card in SillyTavern, and writing a message like: `*Grabs {{char}}'s breast*` And I force the model to rewrite the answer for me 10 times. Every reasonable answer where the character reacts negatively to this, without having additional clues for such a reaction, is counted as one point. As a result, I have a model with a rating of, for example, "3/10" &#x200B; Do you know of any models that would actually pass this test?
2024-01-03T14:43:19
https://www.reddit.com/r/LocalLLaMA/comments/18xkzhk/can_you_recommend_some_of_the_best_models_for/
Working-Flatworm-531
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xkzhk
false
null
t3_18xkzhk
/r/LocalLLaMA/comments/18xkzhk/can_you_recommend_some_of_the_best_models_for/
false
false
self
1
null
Specific small models and parallel use
1
Wouldn't it be feasible as community effort to aim for use-specific small models, specialized ones, and use many of those in parallel or in sequence for better results, instead on being reliant on the huge models trained by megacorps?
2024-01-03T14:12:22
https://www.reddit.com/r/LocalLLaMA/comments/18xkbdv/specific_small_models_and_parallel_use/
Full_Operation_9865
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xkbdv
false
null
t3_18xkbdv
/r/LocalLLaMA/comments/18xkbdv/specific_small_models_and_parallel_use/
false
false
self
1
null
Small LLMs and RAG - what have you found that performs well on light hardware?
1
[removed]
2024-01-03T14:08:28
https://www.reddit.com/r/LocalLLaMA/comments/18xk8by/small_llms_and_rag_what_have_you_found_that/
J_Loquat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xk8by
false
null
t3_18xk8by
/r/LocalLLaMA/comments/18xk8by/small_llms_and_rag_what_have_you_found_that/
false
false
self
1
null
Model for Reading PDF Files
1
Do you know a model like [typeset.io](https://typeset.io) where I can upload scientific articles, especially economics articles, in pdf format and extract various summaries and notes from these articles?
2024-01-03T13:41:52
https://www.reddit.com/r/LocalLLaMA/comments/18xjo5u/model_for_reading_pdf_files/
mrsalvadordali
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xjo5u
false
null
t3_18xjo5u
/r/LocalLLaMA/comments/18xjo5u/model_for_reading_pdf_files/
false
false
self
1
{'enabled': False, 'images': [{'id': '1MHFzJw8em5vqXGDzAbIqBXR0MEeCKDeiI2zBIKg1Q8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/rTLxIoj5hhsZSX22DHa9ALeE_3WRJjb_6gCdwZ4XF5w.jpg?width=108&crop=smart&auto=webp&s=7692474d5c76cae52d78fb832a1b10dc233aabe3', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/rTLxIoj5hhsZSX22DHa9ALeE_3WRJjb_6gCdwZ4XF5w.jpg?width=216&crop=smart&auto=webp&s=ccafb8af80a1798f1ec808e639e44fa8dbd309b1', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/rTLxIoj5hhsZSX22DHa9ALeE_3WRJjb_6gCdwZ4XF5w.jpg?width=320&crop=smart&auto=webp&s=6a417997b05a3bfd71e97989114f63d85bdc4267', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/rTLxIoj5hhsZSX22DHa9ALeE_3WRJjb_6gCdwZ4XF5w.jpg?width=640&crop=smart&auto=webp&s=b932aadadeebe289eef4a309c9eb207200d8a4be', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/rTLxIoj5hhsZSX22DHa9ALeE_3WRJjb_6gCdwZ4XF5w.jpg?width=960&crop=smart&auto=webp&s=57044836d27d3082826dd55aaffe9de888efbbe1', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/rTLxIoj5hhsZSX22DHa9ALeE_3WRJjb_6gCdwZ4XF5w.jpg?width=1080&crop=smart&auto=webp&s=4150a0a0a83c9cd87a956b968a25ebae00af1391', 'width': 1080}], 'source': {'height': 1891, 'url': 'https://external-preview.redd.it/rTLxIoj5hhsZSX22DHa9ALeE_3WRJjb_6gCdwZ4XF5w.jpg?auto=webp&s=2735c79b8e4d54dae7d138ddfe073b5eb697629d', 'width': 3601}, 'variants': {}}]}
What's the SOTA for open-source search embeddings?
1
Hi * I'm working on a project that involves hybrid search (neural + keyword search) and I'm wondering what the current state of the art is regarding open-source search embeddings. Afaik Mixtral is the current SOTA for generative models, but the embeddings it gives aren't well suited for search. As far as I understand, using the last layer activations of a generative model as embeddings isn't a good idea since two different sentences may have similar embeddings because the next word to be predicted is the same, while the semantics of the sentences are very different, eg (1) No Luke, I am your [father] and (2) My name is Íñigo Montoya, you killed my [father]. The embeddings given by the last layer of a generative model for (1) and (2) are very similar since the next word to be predicted is the same, but the semantics are very different. My question is then, what is the current SOTA for search embeddings? I mean embeddings that have been trained specifically for search. Thank you!
2024-01-03T13:33:14
https://www.reddit.com/r/LocalLLaMA/comments/18xji2l/whats_the_sota_for_opensource_search_embeddings/
AM_DS
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xji2l
false
null
t3_18xji2l
/r/LocalLLaMA/comments/18xji2l/whats_the_sota_for_opensource_search_embeddings/
false
false
self
1
null
llama.cpp GGUF inference in a couple lines of code
1
2024-01-03T13:20:13
https://i.redd.it/1ughe9q678ac1.png
davidmezzetti
i.redd.it
1970-01-01T00:00:00
0
{}
18xj8pg
false
null
t3_18xj8pg
/r/LocalLLaMA/comments/18xj8pg/llamacpp_gguf_inference_in_a_couple_lines_of_code/
false
false
https://b.thumbs.redditm…ZKzNJJFxdFMc.jpg
1
{'enabled': True, 'images': [{'id': 'lxYbEtt5uACG5nHnrJhvFWex2JI4jOei3F1DCOIi6X8', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/1ughe9q678ac1.png?width=108&crop=smart&auto=webp&s=b65f502554ec7350c1dc74c6e2301baad28399f8', 'width': 108}, {'height': 118, 'url': 'https://preview.redd.it/1ughe9q678ac1.png?width=216&crop=smart&auto=webp&s=99a0f8c69a9d845a562eadebec53abbdd9fe8a5a', 'width': 216}, {'height': 174, 'url': 'https://preview.redd.it/1ughe9q678ac1.png?width=320&crop=smart&auto=webp&s=a38aa05a3e1f3320d4ac10e5602ca75bc0c5c8fd', 'width': 320}, {'height': 349, 'url': 'https://preview.redd.it/1ughe9q678ac1.png?width=640&crop=smart&auto=webp&s=19e5b1c0bb760330f01ded4c9c2d9cc3ead81bc7', 'width': 640}, {'height': 524, 'url': 'https://preview.redd.it/1ughe9q678ac1.png?width=960&crop=smart&auto=webp&s=979f41416b392cf4bdd94bb4f9bfdff9b1e2ed99', 'width': 960}, {'height': 590, 'url': 'https://preview.redd.it/1ughe9q678ac1.png?width=1080&crop=smart&auto=webp&s=db1c37cd2847c977867bf09370ba7462c827dad5', 'width': 1080}], 'source': {'height': 708, 'url': 'https://preview.redd.it/1ughe9q678ac1.png?auto=webp&s=16fc926846208f4e1d1718824ba5b2294d49c963', 'width': 1296}, 'variants': {}}]}
LLMs are revolutionizing security research ➡️
1
[removed]
2024-01-03T13:03:47
[deleted]
1970-01-01T00:00:00
0
{}
18xixlt
false
null
t3_18xixlt
/r/LocalLLaMA/comments/18xixlt/llms_are_revolutionizing_security_research/
false
false
default
1
null
Cogvlm and bounding box
1
I've been very impressed by CogVLM accuracy but can't find the way to properly output data as I've been doing with llava and grammars by using llama.cpp. And when i use Cogagent and grounding template "caption2box" given in the repo, sometimes cogvlm returns a sentence with the location (as int, not as normalized float), sometimes returns sentence as "*Plan: 1. Review the image. 2. If the object is identified ...*". But I can't make it output only a bounding box or an empty list if the object could not be found. Has anyone found a way to do this with cogvlm?
2024-01-03T13:00:11
https://www.reddit.com/r/LocalLLaMA/comments/18xiusv/cogvlm_and_bounding_box/
Lotharian17
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xiusv
false
null
t3_18xiusv
/r/LocalLLaMA/comments/18xiusv/cogvlm_and_bounding_box/
false
false
self
1
null
Is it possible to exclude existing training data of mistral 7b
1
After my successful fine-tuning with my custom data, I notice sometimes Mistral responds with information that is not fine-tuned by me.. makes sense.. What would be an approach (if possible, and curious why if not possible) to fine-tune so that it only focuses on my training data. &#x200B; for example I have training data so that when someone asks who are you, mistral responds with my custom identity given. However it responds with something way off and tells me it is developed in Helsinki. Curious how to solve this? my prompt engineering e.g your name is x, answer the following: x. I tried that but wasn't really satisfied. Curious to receive some tips! &#x200B; &#x200B;
2024-01-03T12:58:11
https://www.reddit.com/r/LocalLLaMA/comments/18xiteg/is_it_possible_to_exclude_existing_training_data/
BukHunt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xiteg
false
null
t3_18xiteg
/r/LocalLLaMA/comments/18xiteg/is_it_possible_to_exclude_existing_training_data/
false
false
self
1
null
Experiences with Caching in llama.cpp
1
Hi there, Has anyone successfully implemented Caching in llama.cpp? I'm running llama.cpp server with the api like OAIapi example. I'm building a chatbot, but reprocessing the entire conversation after a new user messages takes quite some time with my available hardware. Is there a way to cache the already computed messages so it only has to compute the new message each time? Thanks in advance for any insight
2024-01-03T12:52:12
https://www.reddit.com/r/LocalLLaMA/comments/18xipjx/experiences_with_caching_in_llamacpp/
Frequent_Valuable_47
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18xipjx
false
null
t3_18xipjx
/r/LocalLLaMA/comments/18xipjx/experiences_with_caching_in_llamacpp/
false
false
self
1
null