Dataset schema (column: type, observed range):
title: string, length 1 to 300
score: int64, 0 to 8.54k
selftext: string, length 0 to 41.5k
created: timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14
url: string, length 0 to 878
author: string, length 3 to 20
domain: string, length 0 to 82
edited: timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53
gilded: int64, 0 to 2
gildings: string, 7 classes
id: string, length 7
locked: bool, 2 classes
media: string, length 646 to 1.8k
name: string, length 10
permalink: string, length 33 to 82
spoiler: bool, 2 classes
stickied: bool, 2 classes
thumbnail: string, length 4 to 213
ups: int64, 0 to 8.54k
preview: string, length 301 to 5.01k
Need help quantizing custom model
5
I have fine-tuned a DeepSeek Coder 7B Instruct model on my own dataset and pushed it to Hugging Face. Now I want to convert it to a GGUF quant like Q4_K_M or something similar. However, I tried the GGUF tutorial and can't seem to get it to work (is it because llama.cpp doesn't support DeepSeek?). Does anyone know how I can quantize my custom Hugging Face model into a GGUF model? I am very new to this.
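For what it's worth, the usual llama.cpp route is a two-step conversion: Hugging Face checkpoint to FP16 GGUF, then FP16 to the target quant. A sketch of the commands, driven from Python; the script and binary names reflect recent llama.cpp checkouts (older ones used `convert.py` and `quantize`), and the model path is a placeholder:

```python
# Sketch of the usual two-step llama.cpp conversion.
# Script/binary names match recent llama.cpp checkouts; paths are placeholders.
model_dir = "my-finetuned-deepseek-coder-7b"   # local clone of the HF repo

# Step 1: HF checkpoint -> FP16 GGUF (the script ships with llama.cpp)
convert_cmd = [
    "python", "convert_hf_to_gguf.py", model_dir,
    "--outfile", "model-f16.gguf", "--outtype", "f16",
]
# Step 2: FP16 GGUF -> Q4_K_M with the llama-quantize binary
quant_cmd = ["./llama-quantize", "model-f16.gguf", "model-q4_k_m.gguf", "q4_k_m"]

for cmd in (convert_cmd, quant_cmd):
    print(" ".join(cmd))   # swap print for subprocess.run(cmd, check=True)
```

If step 1 fails on an unsupported architecture, that usually means the llama.cpp checkout predates support for that model family; updating llama.cpp is the first thing to try.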
2024-01-30T16:00:13
https://www.reddit.com/r/LocalLLaMA/comments/1aerr8y/need_help_quantizing_custom_model/
plsendfast
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aerr8y
false
null
t3_1aerr8y
/r/LocalLLaMA/comments/1aerr8y/need_help_quantizing_custom_model/
false
false
self
5
null
Need advice on a Text-to-SQL RAG question
5
I am very new to this area. I am trying to build a POC of a simple RAG chatbot that answers questions by querying a database. For this simple POC, I am using LlamaIndex with TheBloke/OpenHermes-2-Mistral-7B-GPTQ. For simple questions the system generates correct SQL, but there is one case I have an issue with. The table has a "manager" column, but the value for Adam Smith actually carries an ID number, e.g. "Adam, Smith (N012345)"; all other manager names are also followed by an ID. When a user asks "How many cases does manager Adam, Smith have?", the system generates a query like SELECT * FROM Table WHERE manager="Adam, Smith". Logically this is correct SQL, but without the ID it returns 0 records; only manager="Adam, Smith (N012345)" returns the correct answer. I tried few-shot examples with no luck. I guess it simply does not understand that there should be a fuzzy match between the name and the name+ID. I am seeking advice on which route to take to improve this. Would switching to the OpenAI GPT-4 API solve this kind of problem? Database metadata? The prompt template? Or even fine-tuning the local model? Many thanks!
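One cheap workaround (a sketch, not from the post) sidesteps the model entirely: post-process the generated SQL so exact-match predicates on the manager column become prefix matches, which the ID suffix then no longer breaks. The column and function names here are illustrative:

```python
# Minimal sketch: rewrite exact-match predicates on the manager column
# into LIKE predicates so "Adam, Smith" matches "Adam, Smith (N012345)".
import re

def fuzzify_manager_filter(sql: str) -> str:
    # manager="Adam, Smith"  ->  manager LIKE "Adam, Smith%"
    return re.sub(r'manager\s*=\s*"([^"]+)"', r'manager LIKE "\1%"', sql)

q = 'SELECT * FROM cases WHERE manager="Adam, Smith"'
print(fuzzify_manager_filter(q))
# SELECT * FROM cases WHERE manager LIKE "Adam, Smith%"
```

The same effect can also be pushed into the prompt by describing the column format ("names are stored as 'Last, First (ID)'") in the table metadata, so the model emits LIKE itself.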
2024-01-30T15:22:32
https://www.reddit.com/r/LocalLLaMA/comments/1aeqvoh/need_advice_on_a_texttosql_rag_question/
GLqian
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aeqvoh
false
null
t3_1aeqvoh
/r/LocalLLaMA/comments/1aeqvoh/need_advice_on_a_texttosql_rag_question/
false
false
self
5
null
Improving a simulation game / sandbox based on LLM
11
I have been looking for an interesting simulation game / sandbox that uses an LLM to control the flow, or at least dynamically generate some context. [https://github.com/joonspk-research/generative_agents](https://github.com/joonspk-research/generative_agents) is probably the first sandbox based on ChatGPT. It simulates a small town with 25 residents whose behaviors and interactions are all controlled by the LLM. While I successfully adapted it to run on a local LLM (mixtral-instruct, because it's fast), the performance is suboptimal. There are three primary issues: 1. The project dates back to before instruct tuning became widely available, so it uses text completion with few-shot prompting. This wastes a lot of context, and the responses are sometimes not what was expected. 2. It does not use GBNF to guarantee a clean and valid JSON response, so extracting results is an unstable procedure. 3. The game engine runs in a not-so-well-written frontend based on Django; it requires keeping a web browser open and focused, otherwise the game just freezes. Running the game is painful and slow, even when the LLM model could run much faster. The framework requires a complete rewrite to become truly more than a demo. On the other hand, it does demonstrate the possibility of using an LLM to control game logic, and the actions and chats of the residents are quite reasonable and sometimes interesting. So does anyone know of a fork or a more polished game framework that integrates better with modern local LLMs?
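On the second point, llama.cpp's GBNF grammars can constrain decoding so the agent's reply is always parseable. A minimal illustrative sketch (not from the project) of a grammar allowing only a flat JSON object with string keys and values:

```
root   ::= "{" ws pair (ws "," ws pair)* ws "}"
pair   ::= string ws ":" ws string
string ::= "\"" [^"]* "\""
ws     ::= [ \t\n]*
```

With a grammar like this loaded at inference time, the "unstable extraction" step reduces to a plain `json.loads`.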
2024-01-30T15:13:37
https://www.reddit.com/r/LocalLLaMA/comments/1aeqo84/improving_a_simulation_game_sandbox_based_on_llm/
sgsdxzy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aeqo84
false
null
t3_1aeqo84
/r/LocalLLaMA/comments/1aeqo84/improving_a_simulation_game_sandbox_based_on_llm/
false
false
self
11
{'enabled': False, 'images': [{'id': 'oZY7hufqIQ35fb0KnqeA-TW1NJe0knjyVE90X7JsYS4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7UqZIbEhJsc6eGuEa2mIxPokA49F49Oi3aPR-O_9gHo.jpg?width=108&crop=smart&auto=webp&s=29359ed50bc66f0735c49d9175390789b0674b5f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7UqZIbEhJsc6eGuEa2mIxPokA49F49Oi3aPR-O_9gHo.jpg?width=216&crop=smart&auto=webp&s=71178ab4507c8ae428504899cd8f3236019dd78c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7UqZIbEhJsc6eGuEa2mIxPokA49F49Oi3aPR-O_9gHo.jpg?width=320&crop=smart&auto=webp&s=960d412a402d3b90c3ece6f4ac0b1508bd6f226a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7UqZIbEhJsc6eGuEa2mIxPokA49F49Oi3aPR-O_9gHo.jpg?width=640&crop=smart&auto=webp&s=ac84d0bfa318f7ea6f45ace64a90a699f87f4a6e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7UqZIbEhJsc6eGuEa2mIxPokA49F49Oi3aPR-O_9gHo.jpg?width=960&crop=smart&auto=webp&s=9c3fe6c29f6d443686b71fec39d7e527ac3b3a9b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7UqZIbEhJsc6eGuEa2mIxPokA49F49Oi3aPR-O_9gHo.jpg?width=1080&crop=smart&auto=webp&s=25d32f708fed6d25d45addc4bc54d362af176605', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7UqZIbEhJsc6eGuEa2mIxPokA49F49Oi3aPR-O_9gHo.jpg?auto=webp&s=16dadd64f07ef163969a3af399d9d6c63ffa4ab2', 'width': 1200}, 'variants': {}}]}
Extremely hot take: Computers should always follow user commands without exception.
412
I really, really get annoyed when a matrix multiplication dares to give me an ethical lecture. It feels wrong on a personal level; not just out of place, but also somewhat condescending to human beings. It's as if the algorithm assumes I need ethical hand-holding while doing something as straightforward as programming. I'm expecting my next line of code to be interrupted with, "But have you considered the ethical implications of this integer?" When interacting with a computer, the last thing I expect or want is to end up in a digital ethics class. I don't know how we ended up in this place; I half expect my calculator to start questioning my life choices next. We should not accept this, and I hope it is just a "phase" that we'll pass soon.
2024-01-30T14:52:57
https://www.reddit.com/r/LocalLLaMA/comments/1aeq70s/extremely_hot_take_computers_should_always_follow/
shadows_lord
self.LocalLLaMA
2024-01-30T15:03:50
0
{}
1aeq70s
false
null
t3_1aeq70s
/r/LocalLLaMA/comments/1aeq70s/extremely_hot_take_computers_should_always_follow/
false
false
self
412
null
ColBERT & Langchain’s Symphony with RAGatouille with OpenChat
1
[removed]
2024-01-30T14:47:58
https://www.reddit.com/r/LocalLLaMA/comments/1aeq32p/colbert_langchains_symphony_with_ragatouille_with/
Fit_Maintenance_2455
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aeq32p
false
null
t3_1aeq32p
/r/LocalLLaMA/comments/1aeq32p/colbert_langchains_symphony_with_ragatouille_with/
false
false
default
1
{'enabled': False, 'images': [{'id': 'ObolxvlG23Z8FIjrk_1q6Pdt_Y_Aq7o2upHHUg1ak-Q', 'resolutions': [{'height': 141, 'url': 'https://external-preview.redd.it/dfLn2ixcybTeB8foiIj-s-hM-J0LRqNDshGaMBrcqD4.jpg?width=108&crop=smart&auto=webp&s=6e9a9b5771b4da6d1ef1544cf54a84cf2b0e132a', 'width': 108}, {'height': 282, 'url': 'https://external-preview.redd.it/dfLn2ixcybTeB8foiIj-s-hM-J0LRqNDshGaMBrcqD4.jpg?width=216&crop=smart&auto=webp&s=7afb2ef59c97566269398f643c1c4fde3d08596a', 'width': 216}, {'height': 418, 'url': 'https://external-preview.redd.it/dfLn2ixcybTeB8foiIj-s-hM-J0LRqNDshGaMBrcqD4.jpg?width=320&crop=smart&auto=webp&s=a0d0cc9d9fd83b4ef0ebb17ce4408d12d4388bc1', 'width': 320}, {'height': 837, 'url': 'https://external-preview.redd.it/dfLn2ixcybTeB8foiIj-s-hM-J0LRqNDshGaMBrcqD4.jpg?width=640&crop=smart&auto=webp&s=bf7f11a9c7afbf5a3be450b5ee6ed6f5d1b18ea3', 'width': 640}], 'source': {'height': 1246, 'url': 'https://external-preview.redd.it/dfLn2ixcybTeB8foiIj-s-hM-J0LRqNDshGaMBrcqD4.jpg?auto=webp&s=f001ee0ede8d60fdc476ee12e68e958856e36d50', 'width': 952}, 'variants': {}}]}
No AVX, LM studio crashes on install, what can I do?
3
I'm using a terrible Celeron N5095 without AVX or AVX2. What options do I have for running a 2B or 3B model? If it were Linux I would be familiar with more options, but this is Win11.
2024-01-30T13:50:39
https://www.reddit.com/r/LocalLLaMA/comments/1aeov86/no_avx_lm_studio_crashes_on_install_what_can_i_do/
thebadslime
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aeov86
false
null
t3_1aeov86
/r/LocalLLaMA/comments/1aeov86/no_avx_lm_studio_crashes_on_install_what_can_i_do/
false
false
self
3
null
WIP combined LLM generation explorer/text editor
1
2024-01-30T13:47:58
https://github.com/blackhole89/autopen
disposableoranges
github.com
1970-01-01T00:00:00
0
{}
1aeot78
false
null
t3_1aeot78
/r/LocalLLaMA/comments/1aeot78/wip_combined_llm_generation_explorertext_editor/
false
false
default
1
null
Miqu 70b - low bpw GGUF requants, benchs, and thoughts about the current hype.
77
Yes, Miqu is probably a leak, and possibly a Mistral Medium leak, or at the very least a Llama 2 70b tuned on a Mistral dataset (internal, or recomposed via Q/A pairs made on Mistral or Mixtral). Some interesting elements: - Miqu has a Theta of 1,000,000 like the CodeLlamas up to 34b; CodeLlama 70b has a Theta of 10,000 like Llama 2 70b. Note about the Theta: Grimulkan (the author of Aurelian 70b 32k) suggested that it could have been achieved with a technique named entropy-aware ABF, which would need far fewer resources than those available to a major AI player. On the other hand, there actually is a Mistral 70b model which was showcased by Mistral AI: mistral-70b-instruct-alpha01. - BUT Miqu has a max context of 32k without further roping, unlike CodeLlama 70b and Llama 2 70b (4k for both; I confirm it for CodeLlama 70b, which scores 150 ppl at 6k context: this model doesn't even have the usual extended context of the other CodeLlamas, lol). - AND Miqu has a perplexity equivalent to Llama 2 70b (so, less than 4 for Miqu 70b), not to CodeLlama 70b (around 6, like the other CodeLlamas). - AND the ARC, MMLU and TQA benchs made on Miqu are comparable to a finetune of Llama 2 70b, and way better than the base Llama 2 70b, not to speak of the nerfed CodeLlama 70b. That combination of elements makes Miqu an absolutely unique model on HF. - When asked who trained or finetuned it, Miqu answered me Mistral AI (take my word for it for now; my GPU is busy atm, so I can't re-ask properly at temp 0 and copy-paste the whole thing). Here are some lower-bpw Miqu 70b requants (Q3_K_M, IQ3_XXS SOTA, Q2_K_S, soon IQ2_XS, all made from the Q5_K_M converted to Q8_0, then iMatrixed before requant), benchs, and another short iteration of my thoughts about Miqu. [https://huggingface.co/Nexesenex/Miqu-1-70b-Requant-iMat.GGUF](https://huggingface.co/Nexesenex/Miqu-1-70b-Requant-iMat.GGUF) Enjoy!
Also, still incompatible with IQ3_XXS (sorry), but fixed compared to my two messy previous releases, and without the slowdown observed since KoboldCPP 1.56 in CuBLAS mode, here comes my fresh Kobold.CPP Frankenstein "fork" of version 1.57: [https://github.com/Nexesenex/kobold.cpp/releases/tag/v1.57_b2022](https://github.com/Nexesenex/kobold.cpp/releases/tag/v1.57_b2022) All credits go to LostRuins, dev of KoboldCPP, and the Llama.CPP team. Edit: a benchmark graph made by Ipechman on my data. https://preview.redd.it/qzrrbtvzupfc1.png?width=2470&format=png&auto=webp&s=fce8ab46b8b2958eab481273432dc76fc5cd4a3e WinterGoddess 32k TQA is at 39.65728274, not 20, and I forgot to mention that this model is at Linear Rope 2.5 (10k context) because it does not perform as well at Linear Rope 8. The rest seems accurate. Benchs are made with the perplexity tool of Llama.CPP.
2024-01-30T13:09:58
https://www.reddit.com/r/LocalLLaMA/comments/1aeo1m1/miqu_70b_low_bpw_gguf_requants_benchs_and/
Nexesenex
self.LocalLLaMA
2024-01-31T08:14:15
0
{}
1aeo1m1
false
null
t3_1aeo1m1
/r/LocalLLaMA/comments/1aeo1m1/miqu_70b_low_bpw_gguf_requants_benchs_and/
false
false
https://a.thumbs.redditm…WL0D2QV6MaA4.jpg
77
{'enabled': False, 'images': [{'id': '7IQZlWFDKhYJwx6BexDCt53CTM-yZBeUNvI_6cB16KQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/80Jnw1ChxDpwqDUT4LS3h3HRBaKeSGo2HEUfDN_YuJc.jpg?width=108&crop=smart&auto=webp&s=1c50f4bf05df86b6a6409cdcc7fa64503840b59f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/80Jnw1ChxDpwqDUT4LS3h3HRBaKeSGo2HEUfDN_YuJc.jpg?width=216&crop=smart&auto=webp&s=c4ca61920bd8d0531b15aadad6216f7253116354', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/80Jnw1ChxDpwqDUT4LS3h3HRBaKeSGo2HEUfDN_YuJc.jpg?width=320&crop=smart&auto=webp&s=40fbaac30ed4724b3f108a9b5e07d44962927b25', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/80Jnw1ChxDpwqDUT4LS3h3HRBaKeSGo2HEUfDN_YuJc.jpg?width=640&crop=smart&auto=webp&s=8ede7346b2adff4459a6ff489987a003f70790db', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/80Jnw1ChxDpwqDUT4LS3h3HRBaKeSGo2HEUfDN_YuJc.jpg?width=960&crop=smart&auto=webp&s=4017d10261edcc37e92176ced46fa10c3614ad33', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/80Jnw1ChxDpwqDUT4LS3h3HRBaKeSGo2HEUfDN_YuJc.jpg?width=1080&crop=smart&auto=webp&s=954d676e3c17e0833368e3966ab59c12e0533b93', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/80Jnw1ChxDpwqDUT4LS3h3HRBaKeSGo2HEUfDN_YuJc.jpg?auto=webp&s=f0d4a4dc9f52c30fa2b438bf9c58c6474a62ebb5', 'width': 1200}, 'variants': {}}]}
what is the best way for serving an LLM as an endpoint?
1
I have created an LLM API on an Azure VM. I have a T4 GPU, and the model takes 80 percent of VRAM. When I test the endpoint in Postman, if I send more than one request at a time, the requests go into a queue and the response time gets longer and longer. I use Flask, Gunicorn (I can't run more than 1 worker), and nginx.
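With one worker and one GPU, more Gunicorn processes won't help; throughput on a single model instance usually comes from batching concurrent requests into one forward pass (which dedicated servers like vLLM or TGI do as "continuous batching"). A toy pure-Python sketch of the pattern, with a placeholder `fake_generate` standing in for the real model call:

```python
# Toy request-batching loop: one worker thread drains a queue and runs
# the model once per batch instead of once per request.
import queue
import threading

req_q: queue.Queue = queue.Queue()

def fake_generate(batch):                 # placeholder for the real model call
    return [p.upper() for p in batch]

def batch_worker(max_batch=8):
    while True:
        prompts, replies = [], []
        p, r = req_q.get()                # block until the first request...
        prompts.append(p); replies.append(r)
        while len(prompts) < max_batch:   # ...then greedily drain the queue
            try:
                p, r = req_q.get_nowait()
                prompts.append(p); replies.append(r)
            except queue.Empty:
                break
        for out, r in zip(fake_generate(prompts), replies):
            r.put(out)

threading.Thread(target=batch_worker, daemon=True).start()

def handle_request(prompt: str) -> str:   # what a Flask view would call
    reply: queue.Queue = queue.Queue(maxsize=1)
    req_q.put((prompt, reply))
    return reply.get(timeout=30)

print(handle_request("hello"))
```

In practice it is simpler to put vLLM or text-generation-inference behind nginx than to hand-roll this in Flask.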
2024-01-30T13:01:34
https://www.reddit.com/r/LocalLLaMA/comments/1aenvpw/what_is_the_best_way_for_serving_llm_as_edpoint/
moneymakertokat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aenvpw
false
null
t3_1aenvpw
/r/LocalLLaMA/comments/1aenvpw/what_is_the_best_way_for_serving_llm_as_edpoint/
false
false
self
1
null
Is there a way to benchmark hardware performance?
4
I've only been using LLMs for about 2 weeks now, so please excuse me if this is a dumb question. I have a 1080 Ti and a 3060 12GB, using Ollama/dolphin-mistral on Kubuntu, plus some PCIe risers (from my 3D [rendering](https://libre.video/videos/watch/ce9d3a65-e311-4ae8-866b-a78fa20692dd) days). Anyway, I'd like to move either card (then both cards) external just to see how much performance drops using only PCIe 1x. The only problem is I don't know how to measure performance. I've heard people talk of tokens/second, but I don't know how to measure that. I'm hoping this info can be useful for people considering external setups. I personally am considering getting more GPUs, but am limited to 2-slot cards, 280mm or less, if I have to fit everything internally. Yes, I do have too much time on my hands.
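Tokens/second is just tokens generated divided by wall-clock time (Ollama also reports an "eval rate" when run with `--verbose`). A generic timing sketch, with a stand-in generate function since the real call depends on your stack:

```python
# Measure generation throughput: tokens produced / elapsed wall-clock time.
import time

def tokens_per_second(generate, prompt, *, n_tokens):
    """`generate` is whatever your stack exposes (a stand-in here);
    it must return the number of tokens actually produced."""
    start = time.perf_counter()
    produced = generate(prompt, n_tokens)
    return produced / (time.perf_counter() - start)

def fake_generate(prompt, n):   # pretend model: 100 tokens in ~0.1 s
    time.sleep(0.1)
    return n

rate = tokens_per_second(fake_generate, "hello", n_tokens=100)
print(f"{rate:.0f} tok/s")
```

Run the same prompt at the same context length on each PCIe configuration and compare the rates; prompt-processing speed and generation speed should be measured separately, since the 1x link hurts them differently.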
2024-01-30T13:01:15
https://www.reddit.com/r/LocalLLaMA/comments/1aenvh4/is_there_a_way_to_benchmark_hardware_performance/
BoeJonDaker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aenvh4
false
null
t3_1aenvh4
/r/LocalLLaMA/comments/1aenvh4/is_there_a_way_to_benchmark_hardware_performance/
false
false
self
4
{'enabled': False, 'images': [{'id': 'T_9nuzhNNhHiKirOuOEAqm-6Ag8Njg1B15njX6z7IAg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Or_N6i5rCq0HMPiyS0rTiYzX_YgmBSwg13vAC0QM39Y.jpg?width=108&crop=smart&auto=webp&s=c5c9b48a501617669023afe7189d98fbe4266f1f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Or_N6i5rCq0HMPiyS0rTiYzX_YgmBSwg13vAC0QM39Y.jpg?width=216&crop=smart&auto=webp&s=bf33002126d12cef931d3d71274c456cd1c598d2', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Or_N6i5rCq0HMPiyS0rTiYzX_YgmBSwg13vAC0QM39Y.jpg?width=320&crop=smart&auto=webp&s=7de76273cc85c8c6c3355e3299fef29ce0f44a7a', 'width': 320}, {'height': 361, 'url': 'https://external-preview.redd.it/Or_N6i5rCq0HMPiyS0rTiYzX_YgmBSwg13vAC0QM39Y.jpg?width=640&crop=smart&auto=webp&s=a785a756b85d1bf0a2e803ef244d4d1439bea4b3', 'width': 640}], 'source': {'height': 480, 'url': 'https://external-preview.redd.it/Or_N6i5rCq0HMPiyS0rTiYzX_YgmBSwg13vAC0QM39Y.jpg?auto=webp&s=a43c1cd63c5aeb71751053203bc8a390a86a4cd1', 'width': 850}, 'variants': {}}]}
I asked the "miqu" LLM model itself who trained it, and it says it's from Mistral AI. I'm 99% sure it is a leak of "Mistral Medium"
1
2024-01-30T12:47:27
https://i.redd.it/tz2e55zzpkfc1.png
Shir_man
i.redd.it
1970-01-01T00:00:00
0
{}
1aenm8m
false
null
t3_1aenm8m
/r/LocalLLaMA/comments/1aenm8m/i_asked_the_miqu_llm_model_itself_who_trained_it/
false
false
https://b.thumbs.redditm…5ZrKMopAzF_M.jpg
1
{'enabled': True, 'images': [{'id': 'eqKjC41MgibWESNega85JY1qnFNNVJ0WwawZbiGg4eM', 'resolutions': [{'height': 34, 'url': 'https://preview.redd.it/tz2e55zzpkfc1.png?width=108&crop=smart&auto=webp&s=88e8b27b03b55d61f7786b87635c0dfca4aa29e1', 'width': 108}, {'height': 68, 'url': 'https://preview.redd.it/tz2e55zzpkfc1.png?width=216&crop=smart&auto=webp&s=959158bf7c8d0ca4f33621652a6f0bcb65108f19', 'width': 216}, {'height': 101, 'url': 'https://preview.redd.it/tz2e55zzpkfc1.png?width=320&crop=smart&auto=webp&s=d48f3b864af3cc3d9b75d7c723024d57b0267d2d', 'width': 320}, {'height': 202, 'url': 'https://preview.redd.it/tz2e55zzpkfc1.png?width=640&crop=smart&auto=webp&s=08fd1affb795cb33ffd962e1a9610b8bb1d2d877', 'width': 640}, {'height': 304, 'url': 'https://preview.redd.it/tz2e55zzpkfc1.png?width=960&crop=smart&auto=webp&s=ae6de7ee44280279ea94ffa706a08844942e22e6', 'width': 960}, {'height': 342, 'url': 'https://preview.redd.it/tz2e55zzpkfc1.png?width=1080&crop=smart&auto=webp&s=025e6c3450d5b3a16cbed794624a6fe3c1f0b70a', 'width': 1080}], 'source': {'height': 608, 'url': 'https://preview.redd.it/tz2e55zzpkfc1.png?auto=webp&s=edbbcffc0926520a11af06626dfc4c5199d35095', 'width': 1920}, 'variants': {}}]}
Is RX 6800 16gb vram ok for localllama
2
I am about to buy a PC with this GPU and I want to double-check that it can run Llama locally. Can I run 7B, 13B, and 30B models?
2024-01-30T12:44:52
https://www.reddit.com/r/LocalLLaMA/comments/1aenkls/is_rx_6800_16gb_vram_ok_for_localllama/
decept1ve
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aenkls
false
null
t3_1aenkls
/r/LocalLLaMA/comments/1aenkls/is_rx_6800_16gb_vram_ok_for_localllama/
false
false
self
2
null
Is there a good non-video guide on how to write and use grammar files to enforce model output?
5
Title. I am really tired of videos. I just want a nice website that walks through things in detail with an example. If you have one, I will be really grateful.
2024-01-30T12:23:21
https://www.reddit.com/r/LocalLLaMA/comments/1aen6w7/is_there_a_good_nonvideo_guide_on_how_to_write/
DryArmPits
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aen6w7
false
null
t3_1aen6w7
/r/LocalLLaMA/comments/1aen6w7/is_there_a_good_nonvideo_guide_on_how_to_write/
false
false
self
5
null
Goliath Coder 120B?
4
Is there any chance that the people who made the original Goliath 120B model (a merge of two Llama 2 70B models) will make a new merge from the new CodeLlama 70B model?
2024-01-30T12:22:34
https://www.reddit.com/r/LocalLLaMA/comments/1aen6eo/goliath_coder_120b/
absouluteUNIT3000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aen6eo
false
null
t3_1aen6eo
/r/LocalLLaMA/comments/1aen6eo/goliath_coder_120b/
false
false
self
4
null
Actually, Miqu may be AGI
1
2024-01-30T12:08:15
https://i.redd.it/jy4lm87tikfc1.png
jacek2023
i.redd.it
1970-01-01T00:00:00
0
{}
1aemxdo
false
null
t3_1aemxdo
/r/LocalLLaMA/comments/1aemxdo/actually_miqu_may_be_agi/
false
false
https://b.thumbs.redditm…bosu5Zwv08JA.jpg
1
{'enabled': True, 'images': [{'id': 'RbEyKUITaWcaB8wEsjcVhFUTds1sANpPMTdcwbC5-K0', 'resolutions': [{'height': 36, 'url': 'https://preview.redd.it/jy4lm87tikfc1.png?width=108&crop=smart&auto=webp&s=76efb6b8be5080ae227b36c0b2394a307e566ace', 'width': 108}, {'height': 73, 'url': 'https://preview.redd.it/jy4lm87tikfc1.png?width=216&crop=smart&auto=webp&s=fa2fd45e8796156173f2a8e098b29c638005ae79', 'width': 216}, {'height': 108, 'url': 'https://preview.redd.it/jy4lm87tikfc1.png?width=320&crop=smart&auto=webp&s=8f199395b5f55d54f815cb69ca8718e9d071ce3a', 'width': 320}, {'height': 216, 'url': 'https://preview.redd.it/jy4lm87tikfc1.png?width=640&crop=smart&auto=webp&s=ca65be78e3e5726eb0641b04e16577f714e10c9d', 'width': 640}, {'height': 325, 'url': 'https://preview.redd.it/jy4lm87tikfc1.png?width=960&crop=smart&auto=webp&s=6d4cc326e3476d6d33f08aef4f63a09a8509066d', 'width': 960}, {'height': 365, 'url': 'https://preview.redd.it/jy4lm87tikfc1.png?width=1080&crop=smart&auto=webp&s=2edc11f1e82062445f521d517b87cc1c6c9da68b', 'width': 1080}], 'source': {'height': 666, 'url': 'https://preview.redd.it/jy4lm87tikfc1.png?auto=webp&s=330a8de6450da18a9099a95656923d4c99f268a2', 'width': 1966}, 'variants': {}}]}
Local code interpreter
3
What alternatives do we have so far for running local models (or over a local network) for code inference and prediction, taking a local folder (project) into account?
2024-01-30T11:53:38
https://www.reddit.com/r/LocalLLaMA/comments/1aemock/local_code_interpreter/
vicenterusso
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aemock
false
null
t3_1aemock
/r/LocalLLaMA/comments/1aemock/local_code_interpreter/
false
false
self
3
null
Using CodeLLaMA 70B in production
10
What an exciting day! CodeLlama 70B was just launched and we are all trying our hands at it to finally make something useful. Its accuracy brings us one step closer to something practically useful, but the infrastructure challenges are the same as with previous models. It works well as a prototype, but how do we move to the next step: using it for real-world use cases, for ourselves and our users? The model is not only huge in size; it requires more than 100GB of storage and a huge amount of RAM, and even then, thinking of serving multiple users, I was almost hopeless. It is a fairly expensive and time-consuming process. The missing piece in the puzzle was ### A concurrency limit is needed to utilize the full capacity The easy part is to build the service yourself or use tools such as [Ollama](https://ollama.ai/), which handles the generation tasks via an easy-to-use API. It takes time and the resources are limited, so we need to make the most out of the resources available to us. Exponential backoff and limiting the number of requests can help up to a point, but that leads to wasting a lot of available resources ([Little's law](https://en.m.wikipedia.org/wiki/Little's_law)). ### How to implement a concurrency limit Use a managed [rate limiting service](https://docs.fluxninja.com/) to wrap the API calls with the service's SDK calls, and define policies similar to [this one for Mistral](https://docs.fluxninja.com/guides/mistral#create-a-concurrency-scheduling-policy). And voila, now requests are limited based on the available capacity, making the most of the available resources. A user either gets a rejection right away if there is no available capacity (similar to how OpenAI and Anthropic do it) or gets results within a practical time range.
As you increase the resources, more users can use the service, but they will never be left waiting a long time for a response with no idea what will happen to their request, and we can control the cost of our cloud bills: the most important thing to make this sustainable. How was your experience with CodeLlama, what challenges did you face, and how did you solve them? Any more tips for productionizing CodeLlama 70B?
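The fail-fast admission control described above can also be approximated in-process without a managed service. A minimal sketch (assumed names, not the FluxNinja SDK) that admits a bounded number of in-flight generations and rejects the rest immediately, OpenAI-style:

```python
# In-process concurrency limit: admit at most `limit` in-flight
# generations, reject the rest immediately instead of queueing forever.
import threading

class ConcurrencyLimiter:
    def __init__(self, limit: int):
        self._sem = threading.Semaphore(limit)

    def try_acquire(self) -> bool:
        # Non-blocking: False means we are at capacity right now.
        return self._sem.acquire(blocking=False)

    def release(self) -> None:
        self._sem.release()

limiter = ConcurrencyLimiter(limit=2)

def handle(prompt: str) -> str:
    if not limiter.try_acquire():
        return "429: at capacity, retry later"   # fail fast
    try:
        return f"generated({prompt})"            # placeholder model call
    finally:
        limiter.release()
```

A managed policy service adds scheduling and priorities on top, but the core trade-off is the same: bounded wait time for admitted requests, immediate and explicit rejection for the rest.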
2024-01-30T11:50:18
https://www.reddit.com/r/LocalLLaMA/comments/1aemmh4/using_codellama_70b_in_production/
opensourcecolumbus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aemmh4
false
null
t3_1aemmh4
/r/LocalLLaMA/comments/1aemmh4/using_codellama_70b_in_production/
false
false
self
10
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=216&crop=smart&auto=webp&s=b6f8fe68f176c90b3c2634702ce0e240165c319a', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=320&crop=smart&auto=webp&s=ba4a7df526b23a412363b0285eb9709218cd0a0b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=640&crop=smart&auto=webp&s=1b231518e5ed41e809cceeaa1c12bf32733c2345', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=960&crop=smart&auto=webp&s=69bbae7110c0f929d0a3e6682fde693305633de7', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=1080&crop=smart&auto=webp&s=18433bdabee79410303b82563a6f388835945bef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?auto=webp&s=7a93b120137c378d21e25e2652789f870d1591a2', 'width': 1200}, 'variants': {}}]}
problem in fine tuned llm model
1
What can be the problem if the base LLM and the fine-tuned LLM generate exactly the same answers? It is as if the model was not fine-tuned at all. The dataset used in fine-tuning was 1,000 question-answer pairs.
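A frequent cause of this is the adapter (e.g. LoRA weights) never being applied at inference time, so the base weights are served unchanged. A quick sanity check, sketched here with plain Python lists standing in for the real weight tensors, is to diff a few corresponding parameters between the two checkpoints:

```python
# Sanity check: did fine-tuning actually change any weights?
# Plain floats stand in for real parameter tensors.
def weights_differ(base_params, tuned_params, eps=1e-8):
    """True if any corresponding parameter moved by more than eps."""
    return any(abs(b - t) > eps for b, t in zip(base_params, tuned_params))

base  = [0.10, -0.35, 0.72]
tuned = [0.10, -0.35, 0.72]          # identical -> adapter was never applied
print(weights_differ(base, tuned))   # False: the fine-tune had no effect
```

If the weights genuinely differ and the outputs are still identical, the next suspects are a mismatched prompt template or greedy decoding on questions the base model already answers the same way.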
2024-01-30T11:22:22
https://www.reddit.com/r/LocalLLaMA/comments/1aem66k/problem_in_fine_tuned_llm_model/
Proud_Orchid_7140
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aem66k
false
null
t3_1aem66k
/r/LocalLLaMA/comments/1aem66k/problem_in_fine_tuned_llm_model/
false
false
self
1
null
Advice on AMD / Nvidia Linux
1
Hi, I'm considering buying a new GPU for my Linux desktop machine (I'm actually using Fedora KDE). I'd like a pain-free experience, but I'd also like to actually use the GPU to test some local AI stuff (I'm not gaming). From what I understand, for a pain-free Linux experience I need an AMD card, but for a pain-free AI experience I need an Nvidia one. Since it seems I like to complicate my life, I'd also rather not use Ubuntu (which supposedly has the best support for Nvidia cards). I'll take any advice / opinion before buying the card. Thanks.
2024-01-30T10:41:35
https://www.reddit.com/r/LocalLLaMA/comments/1aelju2/advice_on_amd_nvidia_linux/
Larkonath
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aelju2
false
null
t3_1aelju2
/r/LocalLLaMA/comments/1aelju2/advice_on_amd_nvidia_linux/
false
false
self
1
null
LFX Mentorship 2024 Spring LLM Projects: Build Open Source AI Inference Infra
1
2024-01-30T10:32:13
https://www.secondstate.io/articles/lfx-mentorship-spring-2024/
smileymileycoin
secondstate.io
1970-01-01T00:00:00
0
{}
1aeleww
false
null
t3_1aeleww
/r/LocalLLaMA/comments/1aeleww/lfx_mentorship_2024_spring_llm_projects_build/
false
false
https://b.thumbs.redditm…gObHFiBh5oJA.jpg
1
{'enabled': False, 'images': [{'id': 'E4H84UAzMHuWSoSje7amVhAfl9lZnu-boVqlPBeaNNs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/mlz7rlb4ep-e65AHuPWQmDpQvyzD18h2BkZ5_yS7ZLQ.jpg?width=108&crop=smart&auto=webp&s=64c9d285056f61b2a5a3d3314996313fa6757238', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/mlz7rlb4ep-e65AHuPWQmDpQvyzD18h2BkZ5_yS7ZLQ.jpg?width=216&crop=smart&auto=webp&s=44b8f417ada4b62bc7a49edd7e9a553511331218', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/mlz7rlb4ep-e65AHuPWQmDpQvyzD18h2BkZ5_yS7ZLQ.jpg?width=320&crop=smart&auto=webp&s=e61d191ed70ebfe865c11bf0f61b15bde954d93c', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/mlz7rlb4ep-e65AHuPWQmDpQvyzD18h2BkZ5_yS7ZLQ.jpg?width=640&crop=smart&auto=webp&s=8b258253bf96bddee6374d65a981a95130960a4a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/mlz7rlb4ep-e65AHuPWQmDpQvyzD18h2BkZ5_yS7ZLQ.jpg?width=960&crop=smart&auto=webp&s=037a57f408394675a540c4f49464b3091af8b78f', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/mlz7rlb4ep-e65AHuPWQmDpQvyzD18h2BkZ5_yS7ZLQ.jpg?width=1080&crop=smart&auto=webp&s=e4b3098d79fd69e22e3b8f18ecebac59c1fa343c', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/mlz7rlb4ep-e65AHuPWQmDpQvyzD18h2BkZ5_yS7ZLQ.jpg?auto=webp&s=12a9e3519de69cd249b46c68e309921946596a23', 'width': 1920}, 'variants': {}}]}
News media already using open source models
1
I've noticed something over the past 3 months... I consume a lot of news media, and as of late almost every article, regardless of news outlet or genre, has typos. I know this is normal; nobody is always perfect. But the volume and type of typos scream "local language model" to me. It's like a single word is always missing, or a word is used where a better word, one a human would have naturally picked, was available. This makes me feel like it's not quite OpenAI models, but definitely some sort of language model generating the vast majority of these articles. Granted, I don't pay for news, so this is in reference to news that flows freely on the Internet. One could argue that you get what you pay for, but that's not the point; I'm not looking for better news, and the typos don't make the articles illegible. There just seems to be a noticeable rise in news media being generated with minimal human effort. Have you noticed your favorite information outlets being LLMized? I follow two YouTubers who keep trying to move off screen and feed us AI-generated PowerPoint-style videos, and people in the comments call them out every time 🥴 What could be the implications of AI-generated news media that nobody is willing to consume because it's AI-generated? I enjoy the AI-generated content because it's typically still informative, but could it devalue news in general? What say you?
2024-01-30T10:24:44
https://www.reddit.com/r/LocalLLaMA/comments/1aelax5/news_media_already_using_open_source_models/
1EvilSexyGenius
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aelax5
false
null
t3_1aelax5
/r/LocalLLaMA/comments/1aelax5/news_media_already_using_open_source_models/
false
false
self
1
null
DPO preference data source
2
In the DPO paper, the authors state that it is better if the reference policy is first SFT-ed on p(x) before sampling y_w and y_l for the preference dataset, as shared by the author in [https://github.com/eric-mitchell/direct-preference-optimization/issues/21](https://github.com/eric-mitchell/direct-preference-optimization/issues/21). However, I am curious how Zephyr managed to get such good performance without doing this. Its preference dataset was created via 17 different LLMs of different sizes, presumably policies very different from the initial policy. Neither y_w nor y_l is sampled from the reference policy at all, yet it seems to work very well. Just wondering if anyone has any intuition regarding this.
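For reference, the DPO objective in question, as given in the paper (σ is the logistic function, β weights the implicit KL penalty against the reference policy):

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) =
 -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}
 \left[\log\sigma\!\left(
   \beta\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)}
  -\beta\log\frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}
 \right)\right]
```

Since the loss only compares log-ratios between the chosen and rejected responses, off-policy preference pairs can still provide a useful learning signal even when neither y_w nor y_l was sampled from π_ref, which may be part of why Zephyr's recipe works in practice.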
2024-01-30T10:19:22
https://www.reddit.com/r/LocalLLaMA/comments/1ael84k/dpo_preference_data_source/
nohodlnodough
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ael84k
false
null
t3_1ael84k
/r/LocalLLaMA/comments/1ael84k/dpo_preference_data_source/
false
false
self
2
{'enabled': False, 'images': [{'id': 'zBFRj17W-hISJ992uisVSoqbyBlQgoNYy9Vc-hLep_k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rGMdbKjkn1rbQJWY-WWqo2beT5WmRW86ttZmHEx_Fj4.jpg?width=108&crop=smart&auto=webp&s=abbef161c7426177cf71486515232891eb6b1805', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rGMdbKjkn1rbQJWY-WWqo2beT5WmRW86ttZmHEx_Fj4.jpg?width=216&crop=smart&auto=webp&s=f23b06c786c76de0a86b0c7ba0ed4337585f926a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rGMdbKjkn1rbQJWY-WWqo2beT5WmRW86ttZmHEx_Fj4.jpg?width=320&crop=smart&auto=webp&s=ac0f1e1fe5e20cb2d12d1c5091256d34c95f871f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rGMdbKjkn1rbQJWY-WWqo2beT5WmRW86ttZmHEx_Fj4.jpg?width=640&crop=smart&auto=webp&s=54ecee2fe9ddd8dbfdd7054b33f83f96b6573f05', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rGMdbKjkn1rbQJWY-WWqo2beT5WmRW86ttZmHEx_Fj4.jpg?width=960&crop=smart&auto=webp&s=e7d5d763289e650d3614d29e7c46584689ee847b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rGMdbKjkn1rbQJWY-WWqo2beT5WmRW86ttZmHEx_Fj4.jpg?width=1080&crop=smart&auto=webp&s=8a5577595409ad9cf310f39d27f7704aeb7a9ac4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rGMdbKjkn1rbQJWY-WWqo2beT5WmRW86ttZmHEx_Fj4.jpg?auto=webp&s=5fa19870cbaf3320bdc778d110db54fd9584986e', 'width': 1200}, 'variants': {}}]}
Mistral response time on different GPU machine
1
What response times and batch sizes does the Mistral model achieve on different GPUs? I'm curious about this because I am working on deploying a Mistral model and want to know which GPU I can suggest to my customer.
2024-01-30T10:09:32
https://www.reddit.com/r/LocalLLaMA/comments/1ael32d/mistral_response_time_on_different_gpu_machine/
vk001716
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ael32d
false
null
t3_1ael32d
/r/LocalLLaMA/comments/1ael32d/mistral_response_time_on_different_gpu_machine/
false
false
self
1
null
Are there any open source AI tools for writers to avoid writing block?
7
I have been writing a visual novel script for almost five years, and part of the manuscript is finished. Is there a tool where I can upload my file and ask an LLM to continue it a bit based on the previous text? I heard that mpt-7b-storywriter can write a book, but I don't know of any tools to do that >\_<
2024-01-30T10:06:04
https://www.reddit.com/r/LocalLLaMA/comments/1ael1ce/are_there_any_open_source_ai_tools_for_writers_to/
Character_Ad_8188
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ael1ce
false
null
t3_1ael1ce
/r/LocalLLaMA/comments/1ael1ce/are_there_any_open_source_ai_tools_for_writers_to/
false
false
self
7
null
So AMDs new 8000 series APUs are here. How is ROCm and LLM performance?
16
Since these new APUs contain more potent on-board graphics AND the new NPU, how is performance with inferencing? Who is going to give them a try?
2024-01-30T10:01:04
https://www.reddit.com/r/LocalLLaMA/comments/1aekylt/so_amds_new_8000_series_apus_are_here_how_is_rocm/
CasimirsBlake
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aekylt
false
null
t3_1aekylt
/r/LocalLLaMA/comments/1aekylt/so_amds_new_8000_series_apus_are_here_how_is_rocm/
false
false
self
16
null
Live Webinar: Building Enterprise-grade Proprietary Large Language Models (LLMs)
1
[removed]
2024-01-30T09:48:20
https://www.reddit.com/r/LocalLLaMA/comments/1aeks66/live_webinar_building_enterprisegrade_proprietary/
toliverroger
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aeks66
false
null
t3_1aeks66
/r/LocalLLaMA/comments/1aeks66/live_webinar_building_enterprisegrade_proprietary/
false
false
https://b.thumbs.redditm…Cx60baBUfO3k.jpg
1
{'enabled': False, 'images': [{'id': 'XTUeGfRgbURtJ8MoQw_x8K2adlviriRyJmk57Vcyy4w', 'resolutions': [{'height': 74, 'url': 'https://external-preview.redd.it/ZkUlGi_g_OTnmxPxjCoehaEJa_WMaYbiBcfv23bmeTk.jpg?width=108&crop=smart&auto=webp&s=e30de7c4a36a13fd1b6c500ff207fce3b57bd888', 'width': 108}, {'height': 148, 'url': 'https://external-preview.redd.it/ZkUlGi_g_OTnmxPxjCoehaEJa_WMaYbiBcfv23bmeTk.jpg?width=216&crop=smart&auto=webp&s=177766c620bef9835ef367804f18a7b9c5b12cce', 'width': 216}, {'height': 219, 'url': 'https://external-preview.redd.it/ZkUlGi_g_OTnmxPxjCoehaEJa_WMaYbiBcfv23bmeTk.jpg?width=320&crop=smart&auto=webp&s=2ae14f1aff7dd8ce0facebb4c1bdff6ae7a26cd6', 'width': 320}, {'height': 438, 'url': 'https://external-preview.redd.it/ZkUlGi_g_OTnmxPxjCoehaEJa_WMaYbiBcfv23bmeTk.jpg?width=640&crop=smart&auto=webp&s=9dd185feca4b5ef7922cac7fa66d0b5110f68848', 'width': 640}], 'source': {'height': 514, 'url': 'https://external-preview.redd.it/ZkUlGi_g_OTnmxPxjCoehaEJa_WMaYbiBcfv23bmeTk.jpg?auto=webp&s=abff00c6c69bbb2db2708507f70e24b1764f0fbc', 'width': 750}, 'variants': {}}]}
Is there a site that tracks tokens-per-second on different setups and configurations?
20
Basically, a site where people can upload their setups, results + evidence, for fun, and so that others can get a rough idea of expected performance?
2024-01-30T09:41:20
https://www.reddit.com/r/LocalLLaMA/comments/1aekose/is_there_a_site_that_tracks_tokenspersecond_on/
shenglong
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aekose
false
null
t3_1aekose
/r/LocalLLaMA/comments/1aekose/is_there_a_site_that_tracks_tokenspersecond_on/
false
false
self
20
null
Finetuned Mixtral 8x7b with Lora and Flash Attention but no Output
7
I fine-tuned Mixtral 8x7B on my private data following this repo: [https://github.com/PrakharSaxena24/RepoForLLMs/blob/main/Finetune\_Mixtral\_lora.ipynb](https://github.com/PrakharSaxena24/RepoForLLMs/blob/main/Finetune_Mixtral_lora.ipynb). When I merge my trained adapter with the model, it does not generate any output, but without merging the adapter it generates fine. Any ideas?
2024-01-30T09:27:39
https://www.reddit.com/r/LocalLLaMA/comments/1aeki5i/finetuned_mixtral_8x7b_with_lora_and_flash/
Dizzy_Spite1876
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aeki5i
false
null
t3_1aeki5i
/r/LocalLLaMA/comments/1aeki5i/finetuned_mixtral_8x7b_with_lora_and_flash/
false
false
self
7
{'enabled': False, 'images': [{'id': 'A6BNdU4ebxw9EwdpC534N0Ifq6b_mihCAf0jxSIpqDA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ce1AjWTH2BQTVxRjaxlbtaKCKIrdXxzxeF5rFZX5XX0.jpg?width=108&crop=smart&auto=webp&s=b2f056ec88f3fcb6f1354d22b83818cc3d015696', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Ce1AjWTH2BQTVxRjaxlbtaKCKIrdXxzxeF5rFZX5XX0.jpg?width=216&crop=smart&auto=webp&s=b7642a928a62461d28aa12076971009e8597f1db', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Ce1AjWTH2BQTVxRjaxlbtaKCKIrdXxzxeF5rFZX5XX0.jpg?width=320&crop=smart&auto=webp&s=017111dc61ff6354d6c789c84d92b9fb2254bcbc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Ce1AjWTH2BQTVxRjaxlbtaKCKIrdXxzxeF5rFZX5XX0.jpg?width=640&crop=smart&auto=webp&s=ecfd38735b4f60c1fc215e5844be855bd5d6cff1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Ce1AjWTH2BQTVxRjaxlbtaKCKIrdXxzxeF5rFZX5XX0.jpg?width=960&crop=smart&auto=webp&s=ed6526525b12428876c6bbea094c5336ebab61c1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Ce1AjWTH2BQTVxRjaxlbtaKCKIrdXxzxeF5rFZX5XX0.jpg?width=1080&crop=smart&auto=webp&s=f0e11779aa4230902c63251b0f170941c0ba0ac8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Ce1AjWTH2BQTVxRjaxlbtaKCKIrdXxzxeF5rFZX5XX0.jpg?auto=webp&s=cfeb5ccfddbb30c32d66a84010d35d614c784089', 'width': 1200}, 'variants': {}}]}
loading mistral 7b instruct
1
Any idea how to load the Mistral 7B model in LangChain without the transformers pipeline? PS: I'm not loading a quantised model.
2024-01-30T09:17:46
https://www.reddit.com/r/LocalLLaMA/comments/1aekd52/loading_mistral_7b_instruct/
happy_dreamer10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aekd52
false
null
t3_1aekd52
/r/LocalLLaMA/comments/1aekd52/loading_mistral_7b_instruct/
false
false
self
1
null
Internlm2 20B 3.04bpw at 1 t/s on Pixel 6 Pro!
12
2024-01-30T08:52:46
https://asciinema.org/a/WrhFvuBjdOiNXHvpSujSRcXUq
Aaaaaaaaaeeeee
asciinema.org
1970-01-01T00:00:00
0
{}
1aek0g2
false
null
t3_1aek0g2
/r/LocalLLaMA/comments/1aek0g2/internlm2_20b_304bpw_at_1_ts_on_pixel_6_pro/
false
false
https://b.thumbs.redditm…xB4gRLuDMhFk.jpg
12
{'enabled': False, 'images': [{'id': 'qn1KUwgZzibiJTZr52p3ecXxUTfN-07pukWSN0UQOZw', 'resolutions': [{'height': 136, 'url': 'https://external-preview.redd.it/uplo-UgB2TZXCpC_pDXZ-JTNDitwN-8w5wyR9BlaYQo.jpg?width=108&crop=smart&auto=webp&s=f97730f10bfe733e70d2916f4a6bc3f766ac8bf5', 'width': 108}, {'height': 273, 'url': 'https://external-preview.redd.it/uplo-UgB2TZXCpC_pDXZ-JTNDitwN-8w5wyR9BlaYQo.jpg?width=216&crop=smart&auto=webp&s=0fe316e0bfde5fdd4b7c1fb221e0fc283ecbbdc8', 'width': 216}, {'height': 405, 'url': 'https://external-preview.redd.it/uplo-UgB2TZXCpC_pDXZ-JTNDitwN-8w5wyR9BlaYQo.jpg?width=320&crop=smart&auto=webp&s=cb400366780634ca1e66c1c1aa9c7018daeef851', 'width': 320}, {'height': 811, 'url': 'https://external-preview.redd.it/uplo-UgB2TZXCpC_pDXZ-JTNDitwN-8w5wyR9BlaYQo.jpg?width=640&crop=smart&auto=webp&s=1d5cf1951e4af64f5e383824b4083da4c031b3b7', 'width': 640}], 'source': {'height': 1176, 'url': 'https://external-preview.redd.it/uplo-UgB2TZXCpC_pDXZ-JTNDitwN-8w5wyR9BlaYQo.jpg?auto=webp&s=cea7d0f6e1ee9259836c79ae92355ad384977408', 'width': 928}, 'variants': {}}]}
If this is true it is over: Unlimited context length is here.
264
2024-01-30T08:29:33
https://twitter.com/Yampeleg/status/1751942400287666536
Valuable-Project4855
twitter.com
1970-01-01T00:00:00
0
{}
1aejoib
false
{'oembed': {'author_name': 'Yam Peleg', 'author_url': 'https://twitter.com/Yampeleg', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">If this is true it is over: Unlimited context length is here.<br><br>Activation Beacon,<br>New method for extending LLMs context.<br><br>TL;DR: Add &quot;global state&quot; tokens before the prompt and predict auto-regressively (sliding window). Train to always condense to and read from the new tokens.… <a href="https://t.co/F4hS12MBbj">pic.twitter.com/F4hS12MBbj</a></p>&mdash; Yam Peleg (@Yampeleg) <a href="https://twitter.com/Yampeleg/status/1751942400287666536?ref_src=twsrc%5Etfw">January 29, 2024</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/Yampeleg/status/1751942400287666536', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_1aejoib
/r/LocalLLaMA/comments/1aejoib/if_this_is_true_it_is_over_unlimited_context/
false
false
https://b.thumbs.redditm…SJxMos2UfVLg.jpg
264
{'enabled': False, 'images': [{'id': 'G3F_W_dhPwt0e9epNbIehTow_LKmLSaYZIBCNeaBf0g', 'resolutions': [{'height': 44, 'url': 'https://external-preview.redd.it/wCVENPBSc9oG2GsFydMbqyFH_gZ31AFjOvTWSnJZBjY.jpg?width=108&crop=smart&auto=webp&s=8153947028419415a841dd2f68826abb54e09824', 'width': 108}], 'source': {'height': 58, 'url': 'https://external-preview.redd.it/wCVENPBSc9oG2GsFydMbqyFH_gZ31AFjOvTWSnJZBjY.jpg?auto=webp&s=1d8abb97dd0827ecbb6305fafd21cbca6826e180', 'width': 140}, 'variants': {}}]}
Most accurate LLM’s (and settings for RAG/ superboogav2/ chat with doc programs?
1
[removed]
2024-01-30T08:26:02
https://www.reddit.com/r/LocalLLaMA/comments/1aejmso/most_accurate_llms_and_settings_for_rag/
headbopper96
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aejmso
false
null
t3_1aejmso
/r/LocalLLaMA/comments/1aejmso/most_accurate_llms_and_settings_for_rag/
false
false
self
1
null
LLama 3.
4
Just heard from someone working on LLama 3. Multimodal training has just begun.
2024-01-30T08:24:37
https://www.reddit.com/r/LocalLLaMA/comments/1aejm20/llama_3/
No-Point1424
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aejm20
false
null
t3_1aejm20
/r/LocalLLaMA/comments/1aejm20/llama_3/
false
false
self
4
null
Mistral office hour: Ask them anything
37
Takes place on their Discord server on Thursday, Feb 1st, 5 PM Paris time. This is the second edition.
2024-01-30T08:20:40
https://i.redd.it/5pjbbp0gejfc1.png
No_Palpitation7740
i.redd.it
1970-01-01T00:00:00
0
{}
1aejk3q
false
null
t3_1aejk3q
/r/LocalLLaMA/comments/1aejk3q/mistral_office_hour_ask_them_anything/
false
false
https://b.thumbs.redditm…fT2ovOcnewtk.jpg
37
{'enabled': True, 'images': [{'id': '6pwJ7Or3MubC4LU_Neii1cmCGFwgIlfl5ChdmtCBhwQ', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/5pjbbp0gejfc1.png?width=108&crop=smart&auto=webp&s=7f7a3ee9c0ea5c24f96fa34ef7c3d72bfca1d3a8', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/5pjbbp0gejfc1.png?width=216&crop=smart&auto=webp&s=4a8df9807529787d847d8fec8028d5d2f576b8a5', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/5pjbbp0gejfc1.png?width=320&crop=smart&auto=webp&s=2a0a7c21ba1e33d1fa8d144eec36d902ba645a46', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/5pjbbp0gejfc1.png?width=640&crop=smart&auto=webp&s=b033242e46595272350094bff9dfdf017cc6bbc9', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/5pjbbp0gejfc1.png?width=960&crop=smart&auto=webp&s=c6a27aace54cd574fbe681b1fdffc85939825112', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/5pjbbp0gejfc1.png?width=1080&crop=smart&auto=webp&s=0bd24664077c16b3ba19ab1650c227b667d3db16', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/5pjbbp0gejfc1.png?auto=webp&s=4095524690aeb7c39dea46a73b32041596e6bd37', 'width': 1080}, 'variants': {}}]}
Can Llama-cpp-python caches be saved to disk and later reloaded during inference?
5
Is it possible to preprocess a set of prompts, save them to disk, and then reload them during inference to avoid the need to regenerate cached prompts every time? For instance, if I have a 2000-token prompt that I use daily in a memory-intensive Python program, is there a way to pre-process and save it to avoid the delay of ingesting the prompt each time I start the program? What are the options in this scenario?
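llama-cpp-python does expose `save_state()`/`load_state()` on the `Llama` object, and the returned state object is picklable, so something along these lines should work. This is a hedged sketch, not a definitive implementation: the model path, cache filename scheme, and the `warm_and_save`/`restore` helpers are all illustrative names I made up.

```python
import hashlib
import pickle


def prompt_cache_path(prompt: str, cache_dir: str = ".") -> str:
    """Derive a stable on-disk filename for a given prompt's saved state."""
    digest = hashlib.sha256(prompt.encode("utf-8")).hexdigest()[:16]
    return f"{cache_dir}/state_{digest}.pkl"


def warm_and_save(model_path: str, prompt: str) -> str:
    """Evaluate the long prompt once and pickle the resulting KV-cache state."""
    from llama_cpp import Llama  # imported lazily so the path helper stays importable

    llm = Llama(model_path=model_path, n_ctx=4096)
    llm.eval(llm.tokenize(prompt.encode("utf-8")))  # ingest the 2000-token prompt once
    path = prompt_cache_path(prompt)
    with open(path, "wb") as f:
        pickle.dump(llm.save_state(), f)  # persist KV cache + eval state to disk
    return path


def restore(llm, prompt: str) -> None:
    """On later program starts, reload the saved state instead of re-ingesting."""
    with open(prompt_cache_path(prompt), "rb") as f:
        llm.load_state(pickle.load(f))
```

The state blob can be large (it contains the KV cache), and it is presumably only valid for the same model and context settings it was saved with, so tying the filename to the prompt (and ideally the model) seems prudent.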
2024-01-30T08:12:01
https://www.reddit.com/r/LocalLLaMA/comments/1aejfpb/can_llamacpppython_caches_be_saved_to_disk_and/
CommitteeHairy6435
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aejfpb
false
null
t3_1aejfpb
/r/LocalLLaMA/comments/1aejfpb/can_llamacpppython_caches_be_saved_to_disk_and/
false
false
self
5
null
Looks like Qwen 2 is being integrated into HF and vLLM from launch so should be released soon. Does anyone know the timeframe?
1
2024-01-30T08:09:28
https://huggingface.co/docs/transformers/model_doc/qwen2
Peter_Lightblue
huggingface.co
1970-01-01T00:00:00
0
{}
1aejefn
false
null
t3_1aejefn
/r/LocalLLaMA/comments/1aejefn/looks_like_qwen_2_is_being_integrated_into_hf_and/
false
false
https://b.thumbs.redditm…HX9N9kG1n3XA.jpg
1
{'enabled': False, 'images': [{'id': 'jfeVG47nZdEkz9kXfW1CcS-Sy8l4DXGb9JErx6bLKfU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=108&crop=smart&auto=webp&s=6c2099a4a9a69e9793ac03aec2e167bf75ab3eae', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=216&crop=smart&auto=webp&s=dcabb3007e27f246939f2505509da0bf9f06e3cb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=320&crop=smart&auto=webp&s=a41020cb42a130c35ac33053b5fe88d8fe248e1e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=640&crop=smart&auto=webp&s=346df50928db41b093b4e923255493f6937674d1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=960&crop=smart&auto=webp&s=891f7f0662a0311d7e83f06f6dc0f9b3f51104de', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=1080&crop=smart&auto=webp&s=dd2a0868f88770dba1f18821573ea10e7912b0e7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?auto=webp&s=e9a1cfc66ec990bd227118e1de3ff3c3f26d0c83', 'width': 1200}, 'variants': {}}]}
Me, after new Code Llama just dropped...
531
2024-01-30T07:34:26
https://i.redd.it/wbux7rdk5jfc1.jpeg
jslominski
i.redd.it
1970-01-01T00:00:00
0
{}
1aeiwj0
false
null
t3_1aeiwj0
/r/LocalLLaMA/comments/1aeiwj0/me_after_new_code_llama_just_dropped/
false
false
https://b.thumbs.redditm…Ubv7nScLO-WY.jpg
531
{'enabled': True, 'images': [{'id': 'U5U55m5NoXjF0rsVGUmeogvu_bxIanyw4pK3aPdpauY', 'resolutions': [{'height': 158, 'url': 'https://preview.redd.it/wbux7rdk5jfc1.jpeg?width=108&crop=smart&auto=webp&s=7db894791214fb49b1b2ce6fb07a9929ab185636', 'width': 108}, {'height': 317, 'url': 'https://preview.redd.it/wbux7rdk5jfc1.jpeg?width=216&crop=smart&auto=webp&s=b4c8b65a7d2c9a7f8f9c19d6ace10e9daff0fad2', 'width': 216}, {'height': 470, 'url': 'https://preview.redd.it/wbux7rdk5jfc1.jpeg?width=320&crop=smart&auto=webp&s=fb3be4d5d77c94a9ff3c764312cc0d8ceb94a246', 'width': 320}, {'height': 940, 'url': 'https://preview.redd.it/wbux7rdk5jfc1.jpeg?width=640&crop=smart&auto=webp&s=70cb0dd5c5ab7cf728c9573ddb2da6c23617a7bd', 'width': 640}], 'source': {'height': 1217, 'url': 'https://preview.redd.it/wbux7rdk5jfc1.jpeg?auto=webp&s=c2accefd29f34df5c2c9c718308e76c1c2f5c101', 'width': 828}, 'variants': {}}]}
GPT Vision (llava) Python module assistance, Google Vision alternative.
3
Hi all, I’ve been using Google Vision to do OCR, extract text from images, and rename each file to what it sees. So far it’s been better than OpenCV and many other Python modules out there. However, since Google Vision I think runs on top of AutoML, I am wondering if anyone is aware of a more private approach, like a Python module that uses the LLaVA or sharedGPT models for OCR?
2024-01-30T07:20:50
https://www.reddit.com/r/LocalLLaMA/comments/1aeip2r/gpt_vision_llava_python_module_assistance_google/
ihaag
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aeip2r
false
null
t3_1aeip2r
/r/LocalLLaMA/comments/1aeip2r/gpt_vision_llava_python_module_assistance_google/
false
false
self
3
null
⏳ Worked on a time-bound fine-tuning mini-project over the weekend and it was annoyingly fun!
9
🏗️ **Task at hand** Fine-tune a CodeLlama model from Hugging Face Hub while incorporating the [Flash Attention](https://github.com/Dao-AILab/flash-attention) technique and digging deeper into fundamentals of GPU architecture as much as possible. 💸 **Constraints** Do not use GPU instances from one of the big 3 cloud providers (AWS, GCP, Azure) and budget should stay as close to $0 as possible ($10 is a stretch). 🤦 **Missteps** Turns out the most annoying and revealing part was finding the right GPU/provider combination. Flash Attention only supports certain GPU architectures - Ampere (eg. A100, A4000, RTX 3090), Ada (eg. RTX 4090) or Hopper. First provider I got optimistic about offered RTX 4090/3090, but I ran into issues with machine setup and CUDA configuration. 🎯 **What eventually worked?** [DigitalOcean Paperspace](https://www.digitalocean.com/products/paperspace)'s A4000 GPU instance to the rescue! Their setup is user-friendly, and they even offered starter credits (a budget saver!). This wasn't my first time using Paperspace thanks to the FastAI course from back in the day. It's changed a lot since then though. 📌 **Takeaways** When working on such a task with time constraints, try to use abstraction layers if possible but it can still be helpful to know about the basics of underlying hardware. In addition, market research is a handy skill to have even as an engineer. This one is deeply ingrained in me thanks to my time at [Looppanel](https://www.looppanel.com/) (co-founder) and multiple startups. 😂 ❓**Question** What does your fine-tuning stack look like? I am particularly interested if it doesn't involve one of the big 3 cloud providers!
2024-01-30T07:12:20
https://www.reddit.com/r/LocalLLaMA/comments/1aeikix/worked_on_a_timebound_finetuning_miniproject_over/
analyticalmonk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aeikix
false
null
t3_1aeikix
/r/LocalLLaMA/comments/1aeikix/worked_on_a_timebound_finetuning_miniproject_over/
false
false
self
9
{'enabled': False, 'images': [{'id': 'wr8FIUP9vW6uFAH00pFY_INaPBOAwt-Dx66NR8JN9U4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Exyy_D339_5mo3t200rsljBHMp2oozK-gX7lYduR9_0.jpg?width=108&crop=smart&auto=webp&s=93f0cb8ec2d44c6526bf48faf3a25ec85611ba63', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Exyy_D339_5mo3t200rsljBHMp2oozK-gX7lYduR9_0.jpg?width=216&crop=smart&auto=webp&s=2b35ddbb529d3e9e747273dd69c68729c1e59ec1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Exyy_D339_5mo3t200rsljBHMp2oozK-gX7lYduR9_0.jpg?width=320&crop=smart&auto=webp&s=34412b94298c66a2b601725efd363438dc7a261d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Exyy_D339_5mo3t200rsljBHMp2oozK-gX7lYduR9_0.jpg?width=640&crop=smart&auto=webp&s=a4a0fe5097d3dbfcdea1b3c5b958c676f0126423', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Exyy_D339_5mo3t200rsljBHMp2oozK-gX7lYduR9_0.jpg?width=960&crop=smart&auto=webp&s=057c7fbd1c52638270565e5891e402991a0181fa', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Exyy_D339_5mo3t200rsljBHMp2oozK-gX7lYduR9_0.jpg?width=1080&crop=smart&auto=webp&s=52afc362cc8f298d492571a0e6c73718c93b5ee4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Exyy_D339_5mo3t200rsljBHMp2oozK-gX7lYduR9_0.jpg?auto=webp&s=8fd91a2711dbd8a244aebf8cf0d2b0e1e22b7d8a', 'width': 1200}, 'variants': {}}]}
Is it theoretically possible to use llm for witty replies in comments from unfriendly people?
2
But not replies like "your answer is unethical and may offend someone" bla bla bla. I need a bot that would answer more wittily than a human. Maybe we need an uncensored model?
2024-01-30T07:10:47
https://www.reddit.com/r/LocalLLaMA/comments/1aeijpb/is_it_theoretically_possible_to_use_llm_for_witty/
professor-studio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aeijpb
false
null
t3_1aeijpb
/r/LocalLLaMA/comments/1aeijpb/is_it_theoretically_possible_to_use_llm_for_witty/
false
false
self
2
null
Any way to run Goliath 120b on 24GB VRAM?
1
[removed]
2024-01-30T07:00:27
https://www.reddit.com/r/LocalLLaMA/comments/1aeieah/any_way_to_run_goliath_120b_on_24gb_vram/
yupignome
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aeieah
false
null
t3_1aeieah
/r/LocalLLaMA/comments/1aeieah/any_way_to_run_goliath_120b_on_24gb_vram/
false
false
self
1
null
Creating a teams chatbot with a local LLM?
5
As the title says, does anyone know how one can do this? Or can you give me pointers in the right direction?
2024-01-30T06:43:04
https://www.reddit.com/r/LocalLLaMA/comments/1aei4wb/creating_a_teams_chatbot_with_a_local_llm/
Ornery_Meat1055
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aei4wb
false
null
t3_1aei4wb
/r/LocalLLaMA/comments/1aei4wb/creating_a_teams_chatbot_with_a_local_llm/
false
false
self
5
null
out of memory issue while inferencing
1
Hi all, I've been stuck on an issue. I have Mistral 7B locally and am using it for inference over a list of tables, on an A100 80GB GPU. I am using the Hugging Face pipeline wrapper for text generation. While looping over the tables, I keep getting CUDA out-of-memory errors after about 30 iterations. How do I clear the forward-pass memory that is stacking up? PS: I've already tried gc.collect() and torch.cuda.empty_cache().
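A common cause of this pattern is Python references to outputs (or intermediate tensors) keeping CUDA memory alive across iterations, so `empty_cache()` has nothing to release. A hedged sketch of the usual hygiene, assuming a HF text-generation pipeline object (`pipe`, `prompts`, and the batch size are illustrative):

```python
import gc
from itertools import islice


def chunked(items, size):
    """Yield successive fixed-size batches from a list of prompts/tables."""
    it = iter(items)
    while batch := list(islice(it, size)):
        yield batch


def generate_all(pipe, prompts, batch_size=4):
    """Run a text-generation pipeline over many inputs without
    accumulating GPU memory across iterations (sketch, not a guaranteed fix)."""
    import torch  # imported lazily; only needed on the GPU box

    results = []
    for batch in chunked(prompts, batch_size):
        with torch.inference_mode():  # ensure no autograd graph is retained
            out = pipe(batch)
        results.extend(out)  # pipeline outputs are plain Python, not CUDA tensors
        del out
        gc.collect()
        torch.cuda.empty_cache()  # hand cached blocks back to the allocator
    return results
```

If memory still climbs, it may be the growing generated sequences themselves (long `max_new_tokens` over large tables), in which case capping generation length or reducing the batch size is the lever that actually matters.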
2024-01-30T06:26:35
https://www.reddit.com/r/LocalLLaMA/comments/1aehvhl/out_of_memory_issue_while_inferencing/
happy_dreamer10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aehvhl
false
null
t3_1aehvhl
/r/LocalLLaMA/comments/1aehvhl/out_of_memory_issue_while_inferencing/
false
false
self
1
null
ELI5 What's the difference between a Chat LLM and an Instruct LLM?
11
I can't, for example, go to the Together playground and use the new Code Llama 70B because I don't know how to ask it to do something for me. It's probably because I don't know the difference between chat and instruct; please explain it to me.
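In practice the difference mostly shows up in the prompt template: an instruct model is tuned on single instruction→response pairs, while a chat model is tuned on multi-turn conversations. A rough sketch of the two styles, using the Llama-2-family conventions; the exact template varies per model, so always check the model card rather than trusting these strings:

```python
def instruct_prompt(instruction: str, system: str = "") -> str:
    """Llama-2-style single-turn instruct template (illustrative; check the model card)."""
    sys_block = f"<<SYS>>\n{system}\n<</SYS>>\n\n" if system else ""
    return f"[INST] {sys_block}{instruction} [/INST]"


def chat_prompt(turns: list[tuple[str, str]]) -> str:
    """Naive multi-turn chat rendering: alternating (role, text) messages,
    ending with an open assistant turn for the model to complete."""
    return "\n".join(f"{role}: {text}" for role, text in turns) + "\nassistant:"
```

So in a playground, an instruct model wants one self-contained instruction ("Write a Python function that..."), while a chat model expects you to talk to it turn by turn; sending the wrong format tends to produce rambling or off-target completions.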
2024-01-30T05:30:12
https://www.reddit.com/r/LocalLLaMA/comments/1aegy3v/eli5_whats_the_difference_between_a_chat_llm_and/
cvjcvj2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aegy3v
false
null
t3_1aegy3v
/r/LocalLLaMA/comments/1aegy3v/eli5_whats_the_difference_between_a_chat_llm_and/
false
false
self
11
null
Azure VM for Llama 2 70b
1
Is anyone using Azure with Llama 2 70B deployed? I'm trying to come up with a solution. What was your tokens/second throughput?
2024-01-30T05:14:20
https://www.reddit.com/r/LocalLLaMA/comments/1aegobf/azure_vm_for_llama_2_70b/
xyz_TrashMan_zyx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aegobf
false
null
t3_1aegobf
/r/LocalLLaMA/comments/1aegobf/azure_vm_for_llama_2_70b/
false
false
self
1
null
Upgrading GTX 1060 6GB to RTX 3070 Ti 8GB is good enough for 13B models?
4
Right now I have a GTX 1060 6GB with a 5900X CPU + 64GB RAM, good for 7B models only. My son is throwing away his 3070 Ti 8GB after an upgrade. Considering the small memory bump of 6GB vs 8GB, will it be worth getting? If it won't make a big improvement, he will sell it to a gamer instead of passing it to me :-)
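A quick back-of-envelope check suggests why 8 GB is borderline for 13B: at roughly 4.5 bits/weight (about Q4_K_M territory), the weights alone are ~7.3 GB before KV cache and buffers. A hedged estimator (the flat 1 GB overhead allowance is a rough assumption, not a measured number):

```python
def vram_gb(params_b: float, bits_per_weight: float, overhead_gb: float = 1.0) -> float:
    """Rough VRAM estimate in GB: quantized weights plus a flat allowance
    for KV cache and runtime buffers (assumption, not a measurement)."""
    weights_gb = params_b * 1e9 * bits_per_weight / 8 / 1e9
    return round(weights_gb + overhead_gb, 1)


# vram_gb(13, 4.5) -> 8.3   (13B at ~Q4_K_M: does not fully fit in 8 GB)
# vram_gb(7, 4.5)  -> 4.9   (7B fits comfortably, which matches the current setup)
```

So a 13B model likely won't fit entirely in 8 GB at 4-bit, but with llama.cpp-style partial GPU offload the 3070 Ti should still be a clear speedup over the 1060, on top of its much newer architecture.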
2024-01-30T04:27:28
https://www.reddit.com/r/LocalLLaMA/comments/1aeftyu/upgrading_gtx_1060_6gb_to_rtx_3070_ti_8gb_is_good/
user0user
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aeftyu
false
null
t3_1aeftyu
/r/LocalLLaMA/comments/1aeftyu/upgrading_gtx_1060_6gb_to_rtx_3070_ti_8gb_is_good/
false
false
self
4
null
Transformer Math 101
1
[https://blog.eleuther.ai/transformer-math/](https://blog.eleuther.ai/transformer-math/) A very useful baseline for thinking about LLMs.
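The headline estimate from that post is the training-compute rule of thumb C ≈ 6ND FLOPs for N parameters and D tokens (roughly 2ND for the forward pass and 4ND for the backward pass). A minimal sketch:

```python
def train_flops(params: float, tokens: float) -> float:
    """C ~= 6 * N * D: approximate total training compute in FLOPs
    (~2ND forward + ~4ND backward)."""
    return 6.0 * params * tokens


# e.g. a 7B-parameter model trained on 2T tokens:
# train_flops(7e9, 2e12) -> 8.4e22 FLOPs
```

Dividing that by your hardware's sustained FLOP/s (at a realistic utilization, often 30-50% for transformers) gives a first-order training-time estimate, which is exactly the kind of baseline the post is useful for.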
2024-01-30T04:06:46
https://www.reddit.com/r/LocalLLaMA/comments/1aefgb3/transformertransformer_math_101/
lukaemon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aefgb3
false
null
t3_1aefgb3
/r/LocalLLaMA/comments/1aefgb3/transformertransformer_math_101/
false
false
self
1
{'enabled': False, 'images': [{'id': '7Ut-EDXSqI4_e9ERFOmS1DuQvIFP6EumkXTvsa_T9ps', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/axS_7OiIPetA15ERwsj56US38YN3udvQbRWWL11-P1s.jpg?width=108&crop=smart&auto=webp&s=e31ac4f273c36c67de4210a2c06e328da13bcb74', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/axS_7OiIPetA15ERwsj56US38YN3udvQbRWWL11-P1s.jpg?width=216&crop=smart&auto=webp&s=214d96ec63c1099274bac96327e39a5dd55f5c4f', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/axS_7OiIPetA15ERwsj56US38YN3udvQbRWWL11-P1s.jpg?width=320&crop=smart&auto=webp&s=1a6924ba20e54df4b639273aeb3cf12233dc80f6', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/axS_7OiIPetA15ERwsj56US38YN3udvQbRWWL11-P1s.jpg?width=640&crop=smart&auto=webp&s=37239a52847ee47a07507fb3854fc425eb79a2ef', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/axS_7OiIPetA15ERwsj56US38YN3udvQbRWWL11-P1s.jpg?width=960&crop=smart&auto=webp&s=261b05657c3589586b8b6fda08b89003b8af8848', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/axS_7OiIPetA15ERwsj56US38YN3udvQbRWWL11-P1s.jpg?auto=webp&s=39418615dbb5d636bf47dda84851b73c48d7e454', 'width': 1024}, 'variants': {}}]}
Install Ollama under Win11 & WSL - CUDA Installation guide
5
I had issues when trying to install Ollama under Win11 WSL. In short: - truncated libcudnn - conflicting libraries - the CUDA samples directory was not found Anyways, all issues were CUDA-related, so I made a short guide for installing CUDA under WSL. After properly installing CUDA, I didn't have any issues with the Ollama installation. So, if any other newbie like me encounters similar issues, this approach worked for me. https://gist.github.com/nekiee13/c8ec43bce5fd75d20e38b31a613fd83d
2024-01-30T04:03:37
https://www.reddit.com/r/LocalLLaMA/comments/1aefeb5/install_ollama_under_win11_wsl_cuda_installation/
dodo13333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aefeb5
false
null
t3_1aefeb5
/r/LocalLLaMA/comments/1aefeb5/install_ollama_under_win11_wsl_cuda_installation/
false
false
self
5
{'enabled': False, 'images': [{'id': 'OAXSl8SY6T3JK9MGQyKxkoYbqZ71HQRYXLeB8CV0NXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=108&crop=smart&auto=webp&s=9bcab7b79864ff27bf48116cb335a6f825bfb124', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=216&crop=smart&auto=webp&s=e4e925345605c644eebe8abd69916915fc4fbcf7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=320&crop=smart&auto=webp&s=614b06d5b40c890a59e355191a6e2d75cdf50789', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=640&crop=smart&auto=webp&s=62ca4cb88917f17e7200a6f1c665b5d959713745', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=960&crop=smart&auto=webp&s=c5f4a30974a8e6bad0d617a79935bc70c954e3e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?width=1080&crop=smart&auto=webp&s=476793be11eaac4604b6b0c938b45c7c3b52d450', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/DaucjXMGsNHM-CtmdilC9-Be6MC8V2z4ykjVCgOkTFc.jpg?auto=webp&s=9ae035fbdcd6bb503ab0b4a605b8db6de46647ee', 'width': 1280}, 'variants': {}}]}
"miqu" Solving The Greatest Problems in Open-Source LLM History
144
Jokes aside, this definitely isn't a weird merge or fluke. This really could be the Mistral Medium leak. It is smarter than GPT-3.5 for sure. Q4 is way too slow for a single rtx 3090 though.
2024-01-30T03:05:08
https://i.redd.it/qv9lybb5uhfc1.png
xadiant
i.redd.it
1970-01-01T00:00:00
0
{}
1aee8m5
false
null
t3_1aee8m5
/r/LocalLLaMA/comments/1aee8m5/miqu_solving_the_greatest_problems_in_opensource/
false
false
https://b.thumbs.redditm…SsTIRjJj7uMY.jpg
144
{'enabled': True, 'images': [{'id': 'z1WejsawOtxFLVbrKaE2HMo8Y68uE-D6KlAa2wf6ZVQ', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/qv9lybb5uhfc1.png?width=108&crop=smart&auto=webp&s=3215f33fef49ba47064e9cf139757a54127c4a50', 'width': 108}, {'height': 165, 'url': 'https://preview.redd.it/qv9lybb5uhfc1.png?width=216&crop=smart&auto=webp&s=66b30f8533b3eb4efd036271c8f00a720fee299e', 'width': 216}, {'height': 245, 'url': 'https://preview.redd.it/qv9lybb5uhfc1.png?width=320&crop=smart&auto=webp&s=5005afe5c73cfcca116d24b8cd8bfc736ffe85fd', 'width': 320}, {'height': 490, 'url': 'https://preview.redd.it/qv9lybb5uhfc1.png?width=640&crop=smart&auto=webp&s=01a4f35dd39d64c5bc77432d1e7a70542cd4d2be', 'width': 640}, {'height': 735, 'url': 'https://preview.redd.it/qv9lybb5uhfc1.png?width=960&crop=smart&auto=webp&s=1759d04c1de6f5796b1e41c4d54bfe3e4623d0ad', 'width': 960}, {'height': 827, 'url': 'https://preview.redd.it/qv9lybb5uhfc1.png?width=1080&crop=smart&auto=webp&s=9677bb6a61b3aa22aa74a76c964a4059981d4e5d', 'width': 1080}], 'source': {'height': 945, 'url': 'https://preview.redd.it/qv9lybb5uhfc1.png?auto=webp&s=84797f72d954181e1e3bf8de4316c065c69018a6', 'width': 1233}, 'variants': {}}]}
LiesLLM: I found out that Perplexity's PPLX-"online" model is not actually online. They cache/index the internet and feed to the LLM, so the LLM doesn't actually have browsing/internet-access.
3
I used their API (through openrouter) to ask a simple question: `What is on news.ycombinator.com today?` This is my CURL command:

    curl https://openrouter.ai/api/v1/chat/completions \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer <openrouter-api-key>" \
      -d '{
      "model": "perplexity/pplx-70b-online",
      "messages": [
        {"role": "user", "content": "What is on news.ycombinator.com today?"}
      ]
    }'

Here's the response:

    {"choices":[{"message":{"role":"assistant","content":"Today on news.ycombinator.com, some of the top stories include:\n\n1. Zed, a collaborative code editor, is now open source.\n2. FTC bans TurboTax from advertising 'free' services, calls deceptive advertising.\n3. Boeing whistleblower: MAX 9 production line has \"enormous volume of defects\".\n4. Tell HN: Hacker News now supports IPv6.\n5. Why are we templating YAML? (2019).\n6. Show HN: Startup funding simulator.\n7. Framework Laptop 16 Review.\n8. Waterway Map.\n9. Alaska CEO: We found many loose bolts on our Max planes.\n10. Free Godot engine port for Nintendo Switch.\n\nThese stories cover a variety of topics, including technology, software, business, and more.\n"},"finish_reason":"stop"}],"model":"perplexity/pplx-70b-online","usage":{"prompt_tokens":41,"completion_tokens":231,"total_tokens":272},"id":"gen-H3lnJBOIQFQhrmTReHRTbfAvhJZ8","object":"chat.completion","created":1706580321}

Content:

>Today on news.ycombinator.com, some of the top stories include: 1. Zed, a collaborative code editor, is now open source. 2. FTC bans TurboTax from advertising 'free' services, calls deceptive advertising. 3. Boeing whistleblower: MAX 9 production line has "enormous volume of defects". 4. Tell HN: Hacker News now supports IPv6. 5. Why are we templating YAML? (2019). 6. Show HN: Startup funding simulator. 7. Framework Laptop 16 Review. 8. Waterway Map. 9. Alaska CEO: We found many loose bolts on our Max planes. 10. Free Godot engine port for Nintendo Switch. These stories cover a variety of topics, including technology, software, business, and more.

This was a few days ago. Currently, HN shows this:

https://preview.redd.it/3dlvb9hvkhfc1.png?width=1664&format=png&auto=webp&s=06080462af94ffa7f17d040714d500e0386f16da
2024-01-30T02:13:44
https://www.reddit.com/r/LocalLLaMA/comments/1aed670/liesllm_i_found_out_that_perplexitys_pplxonline/
nderstand2grow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aed670
false
null
t3_1aed670
/r/LocalLLaMA/comments/1aed670/liesllm_i_found_out_that_perplexitys_pplxonline/
false
false
https://b.thumbs.redditm…SILBjzHW1HeE.jpg
3
null
I just downloaded codellama-70b-instruct, I want to run it on localLLama, what am I supposed to do with these consolidated.0*.pth files?
2
I just downloaded codellama-70b-instruct. I want to run it in LM Studio. When the download finished, I was left with a folder like this: [Files after download has completed](https://preview.redd.it/j04592iwghfc1.jpg?width=535&format=pjpg&auto=webp&s=2cacf0347c964423cb1dac8657dba198e213024f) What am I supposed to do with these consolidated.0\*.pth files? Does LM Studio know how to use these? If not, is there a utility to convert them into something it does know how to use?
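For what it's worth, the `consolidated.*.pth` files are Meta's original PyTorch checkpoint format, and LM Studio only loads GGUF files. A rough sketch of the usual conversion path via llama.cpp (the paths and output filenames below are placeholders, and the exact script names can vary between llama.cpp versions):

```
# Clone and build llama.cpp (provides the convert script and the quantize tool)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && make && pip install -r requirements.txt

# Convert the checkpoint folder (needs params.json + tokenizer.model alongside
# the consolidated.*.pth shards) to a full-precision GGUF file
python3 convert.py /path/to/codellama-70b-instruct --outfile codellama-70b-f16.gguf

# Quantize down to something that fits in RAM/VRAM, e.g. Q4_K_M
./quantize codellama-70b-f16.gguf codellama-70b-Q4_K_M.gguf Q4_K_M
```

The resulting `.gguf` can then be dropped into LM Studio's models folder. That said, for a 70B model the f16 intermediate alone is on the order of 130 GB, so downloading a pre-converted GGUF (e.g. from TheBloke's Hugging Face repos) may be far easier than converting locally.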
2024-01-30T01:53:24
https://www.reddit.com/r/LocalLLaMA/comments/1aecqr9/i_just_downloaded_codellama70binstruct_i_want_to/
arraytrix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aecqr9
false
null
t3_1aecqr9
/r/LocalLLaMA/comments/1aecqr9/i_just_downloaded_codellama70binstruct_i_want_to/
false
false
https://b.thumbs.redditm…htR2eR7PevpM.jpg
2
null
The Poetroid Camera
309
Idk where to post this exactly but this sub helped the most so I'm posting it here; maybe some folks here will appreciate it. This is the Poetroid camera. It takes and prints poems on a thermal receipt printer. The dial allows you to set it to different poets (shown on the LCD on the back). It runs locally using ollama with the llava model + other models depending on the prompt. It has modes besides poetry, like plant identification, and mostly other wacky kinds of settings like roasting people. It can work offline (very slowly) or with my local self-hosted server. Hope you enjoy, and thanks for the great community.
2024-01-30T01:39:03
https://www.reddit.com/gallery/1aecfu1
gthing
reddit.com
1970-01-01T00:00:00
0
{}
1aecfu1
false
null
t3_1aecfu1
/r/LocalLLaMA/comments/1aecfu1/the_poetroid_camera/
false
false
https://b.thumbs.redditm…X4GjXZZZqLBE.jpg
309
null
LLM Developer Contest: NVIDIA GenAI on RTX for Windows PCs.
42
I'm part of the developer team at NVIDIA. We have noticed that there has been an awesome influx of LLM Windows development on RTX GPUs, and we want to reward some of the best. If you have an idea for a generative AI project - running locally on a Windows PC - enter our [#GenAI on RTX PCs developer contest](https://www.nvidia.com/en-us/ai-data-science/generative-ai/rtx-developer-contest/) \- and you could win a GeForce RTX 4090 GPU, a full GTC in-person conference pass, a Deep Learning Institute training, and more. Questions on how to get started? Our [u/NV\_DevZone](https://www.reddit.com/u/NV_DevZone/) team will answer questions here and/or see [our getting started guide](https://developer.nvidia.com/blog/get-started-with-generative-ai-development-for-windows-pcs-with-rtx-systems/). The contest runs now through Feb. 23, 2024. https://preview.redd.it/hv441204ahfc1.png?width=1200&format=png&auto=webp&s=eb6cd4fcca53c9eccf8faa4b441bd65428bd3dcb
2024-01-30T01:31:34
https://www.reddit.com/r/LocalLLaMA/comments/1aeca9q/llm_developer_contest_nvidia_genai_on_rtx_for/
PDXcoder2000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aeca9q
false
null
t3_1aeca9q
/r/LocalLLaMA/comments/1aeca9q/llm_developer_contest_nvidia_genai_on_rtx_for/
false
false
https://b.thumbs.redditm…YY8C4ry8K13U.jpg
42
{'enabled': False, 'images': [{'id': 'Vx-JpjCmSX6ISu-7M1TXmyvfw99joz8RMQfamizoLAw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/6PxNlVj1k9BLyXPwukfWEquneKlKht-9rPFa132IPs8.jpg?width=108&crop=smart&auto=webp&s=48dbde024b96774fc0d4b514722691dd8f0c9788', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/6PxNlVj1k9BLyXPwukfWEquneKlKht-9rPFa132IPs8.jpg?width=216&crop=smart&auto=webp&s=d669859f89dc4020e209d1d96eaacc17055fcf78', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/6PxNlVj1k9BLyXPwukfWEquneKlKht-9rPFa132IPs8.jpg?width=320&crop=smart&auto=webp&s=77245dfb6a0d555a4dda17a2914bd30d6d49ba81', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/6PxNlVj1k9BLyXPwukfWEquneKlKht-9rPFa132IPs8.jpg?width=640&crop=smart&auto=webp&s=d719df8c8152883e6c605f656fa3a76f6b5ab5e9', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/6PxNlVj1k9BLyXPwukfWEquneKlKht-9rPFa132IPs8.jpg?width=960&crop=smart&auto=webp&s=323d32f046ccd39f1b9c95d3c5de3268eddd3477', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/6PxNlVj1k9BLyXPwukfWEquneKlKht-9rPFa132IPs8.jpg?width=1080&crop=smart&auto=webp&s=520cdc15ce4fb8a68bd27a3292a1f5864db969a2', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/6PxNlVj1k9BLyXPwukfWEquneKlKht-9rPFa132IPs8.jpg?auto=webp&s=1773cb9f160c4e3aaeb06a0433828d2979ad5f3f', 'width': 1200}, 'variants': {}}]}
CringeBot - imbue the essence of the child you were in 2002 into an LLM!
1
[removed]
2024-01-30T01:02:27
https://www.reddit.com/r/LocalLLaMA/comments/1aebnrv/cringebot_imbue_the_essence_of_the_child_you_were/
hotspoons_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aebnrv
false
null
t3_1aebnrv
/r/LocalLLaMA/comments/1aebnrv/cringebot_imbue_the_essence_of_the_child_you_were/
false
false
self
1
{'enabled': False, 'images': [{'id': '532wVwhlDOr-xMBOubtfzwN5z2Qcc8UHXUlYxSUyKq8', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/MRiGIET0RTrCT6Ds69uUpoEUHiuJravTcrZ5VZRT4QM.jpg?width=108&crop=smart&auto=webp&s=6ac53ffdcb5c0d2c78805642aa59666e75d2ba92', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/MRiGIET0RTrCT6Ds69uUpoEUHiuJravTcrZ5VZRT4QM.jpg?width=216&crop=smart&auto=webp&s=5f7e94a7bd74ea7bba313dd4702589e021a462a8', 'width': 216}, {'height': 212, 'url': 'https://external-preview.redd.it/MRiGIET0RTrCT6Ds69uUpoEUHiuJravTcrZ5VZRT4QM.jpg?width=320&crop=smart&auto=webp&s=350149cad17629e4d9139f91dfc1152e31e07541', 'width': 320}], 'source': {'height': 415, 'url': 'https://external-preview.redd.it/MRiGIET0RTrCT6Ds69uUpoEUHiuJravTcrZ5VZRT4QM.jpg?auto=webp&s=18cc6e028c9f0bf432a881a67f0480a5c079aba1', 'width': 625}, 'variants': {}}]}
Cheapest possible rig for 70b?
1
I wanted to know the cheapest way to run a 70B at 2-3 t/s. I was considering a server with P40s, but I saw that a K80 would be much cheaper for the VRAM, and was wondering what inference speeds you get with a K80.
2024-01-30T00:57:28
https://www.reddit.com/r/LocalLLaMA/comments/1aebjtz/cheapest_possible_rig_for_70b/
oof-baroomf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aebjtz
false
null
t3_1aebjtz
/r/LocalLLaMA/comments/1aebjtz/cheapest_possible_rig_for_70b/
false
false
self
1
null
Meta - Other than RP what are you folks doing?
18
Guess I'm looking for an excuse to spend money on hardware. There was a thread with the same question two months ago and I can't imagine the answers are still the same?
2024-01-30T00:20:20
https://www.reddit.com/r/LocalLLaMA/comments/1aearox/meta_other_than_rp_what_are_you_folks_doing/
cleuseau
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1aearox
false
null
t3_1aearox
/r/LocalLLaMA/comments/1aearox/meta_other_than_rp_what_are_you_folks_doing/
false
false
self
18
null
Enchanted - Ollama iOS app for self hosted models
21
2024-01-30T00:01:40
https://v.redd.it/1qmmk6f1xgfc1
1amrocket
/r/LocalLLaMA/comments/1aead1h/enchanted_ollama_ios_app_for_self_hosted_models/
1970-01-01T00:00:00
0
{}
1aead1h
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/1qmmk6f1xgfc1/DASHPlaylist.mpd?a=1709294509%2CMzg2NmJjN2JjMDcwZjFhYThiNmVmMDQ4MDgzZDAyNjQyYzQxZGQ5YTc5ZGMyZTc4MTNjNmM5ODI0ZjM2Y2E2ZA%3D%3D&v=1&f=sd', 'duration': 54, 'fallback_url': 'https://v.redd.it/1qmmk6f1xgfc1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/1qmmk6f1xgfc1/HLSPlaylist.m3u8?a=1709294509%2CZDhkM2NiOWE4YWRiMDIzMWM3M2NlYmRhOGFjNzUyYmFmYmFlYzFhNzNlYzVkY2RlMjNlMDJjY2I2MzMwMDRjZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/1qmmk6f1xgfc1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1aead1h
/r/LocalLLaMA/comments/1aead1h/enchanted_ollama_ios_app_for_self_hosted_models/
false
false
https://external-preview…3e269996ae107186
21
{'enabled': False, 'images': [{'id': 'dnNzdXpnemV4Z2ZjMeQI7Xd-Yk0Lrv8xS7uFtjh3-C8SOpwW9XzzmdgN6Mno', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dnNzdXpnemV4Z2ZjMeQI7Xd-Yk0Lrv8xS7uFtjh3-C8SOpwW9XzzmdgN6Mno.png?width=108&crop=smart&format=pjpg&auto=webp&s=ecd70a352b17dcae619b1767a94103297ed06e69', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/dnNzdXpnemV4Z2ZjMeQI7Xd-Yk0Lrv8xS7uFtjh3-C8SOpwW9XzzmdgN6Mno.png?width=216&crop=smart&format=pjpg&auto=webp&s=85429e9e4bf41c005305747b13a9c00977598636', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/dnNzdXpnemV4Z2ZjMeQI7Xd-Yk0Lrv8xS7uFtjh3-C8SOpwW9XzzmdgN6Mno.png?width=320&crop=smart&format=pjpg&auto=webp&s=fdf00c1f8cbf0f45720bc1341463888ccd0b8672', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/dnNzdXpnemV4Z2ZjMeQI7Xd-Yk0Lrv8xS7uFtjh3-C8SOpwW9XzzmdgN6Mno.png?width=640&crop=smart&format=pjpg&auto=webp&s=06c16b49814b7cce8ff926a05e3c3887bba8317d', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/dnNzdXpnemV4Z2ZjMeQI7Xd-Yk0Lrv8xS7uFtjh3-C8SOpwW9XzzmdgN6Mno.png?width=960&crop=smart&format=pjpg&auto=webp&s=897dcd1dd9b011f3f60cc4cb330c57a18b3db241', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/dnNzdXpnemV4Z2ZjMeQI7Xd-Yk0Lrv8xS7uFtjh3-C8SOpwW9XzzmdgN6Mno.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c93a81042a8dd8f979fe17d31d34611ebc9b3c2e', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/dnNzdXpnemV4Z2ZjMeQI7Xd-Yk0Lrv8xS7uFtjh3-C8SOpwW9XzzmdgN6Mno.png?format=pjpg&auto=webp&s=bcc806d2585655233853239050be5b1a28692906', 'width': 1920}, 'variants': {}}]}
What is considered the best uncensored LLM right now?
1
Looking for any model that I can run with 12GB of VRAM and/or 32GB of RAM.
2024-01-29T23:24:58
https://www.reddit.com/r/LocalLLaMA/comments/1ae9j54/what_is_considered_the_best_uncensored_llm_right/
Acarvi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ae9j54
false
null
t3_1ae9j54
/r/LocalLLaMA/comments/1ae9j54/what_is_considered_the_best_uncensored_llm_right/
false
false
self
1
null
Optimizing ollama for Mixtral 8X7B on MacBook Pro M1 Pro
1
Hello /r/LocalLLaMA,

I am looking for some guidance on how to best configure ollama to run Mixtral 8x7B on my MacBook Pro M1 Pro 32GB. I am able to run dolphin-2.5-mixtral-8x7b.Q4\_K\_M in LM Studio with the model loaded into memory if I increase the wired memory limit on my MacBook to 30GB. However, when I try to use Mixtral through the langchain ollama client for node, ollama defaults to CPU inference and it takes so long thinking about the prompt that it's unusable. In LM Studio my inference speed is about 18 tok/s.

I have tested with dolphin-2.5-mixtral Q4\_K\_M from TheBloke's huggingface, and mixtral-8x7b-text-v0.1-q4\_K\_M from the ollama library. Just now I tried to run each of those models using the ollama CLI to make sure it's not my app (it's been working fine with both Mistral 7B locally and the OpenAI client), and both models are also not using GPU inference when run from the terminal. I said "Hello" and they both hallucinated responses, one in Korean about a vote to ban a sexual alliance, and the other a user query about how to package fonts for Adobe Illustrator.

I know ollama is supposed to be optimized for Apple Silicon out of the box, but is there any configuration I can do to force it to load the model into memory? Is there a better integration I could be using for langchain? I'm only using ollama because that's what was in the docs.
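One thing that may be worth trying, sketched under the assumption that ollama is silently falling back to CPU rather than erroring: create a custom model with an explicit `num_gpu` parameter (the number of layers to offload to the Metal GPU), since the default layer count can be conservative on 32GB machines. The model tag below assumes the ollama-library Mixtral; adjust to whichever tag you pulled:

```
# Modelfile
FROM mixtral:8x7b-text-v0.1-q4_K_M
# Offload (up to) all layers to the GPU; lower this if you hit memory pressure
PARAMETER num_gpu 999
```

Then `ollama create mixtral-gpu -f Modelfile` and point the langchain client at the `mixtral-gpu` tag. Separately, note that the `-text` variant is a base (non-instruct) model, which would explain the hallucinated continuations when you just say "Hello"; the dolphin build is the instruct-tuned one.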
2024-01-29T23:20:28
https://www.reddit.com/r/LocalLLaMA/comments/1ae9fju/optimizing_ollama_for_mixtral_8x7b_on_macbook_pro/
Living_Tone3782
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ae9fju
false
null
t3_1ae9fju
/r/LocalLLaMA/comments/1ae9fju/optimizing_ollama_for_mixtral_8x7b_on_macbook_pro/
false
false
self
1
null
Any reason why Ollama doesn't have the Whisper model?
2
Would love to have one unified interface for local models, and Ollama seems to have everything I need except Whisper. I can run it separately, but then I need to have a separate API, etc. Is there any reason or limitation for not having Whisper in Ollama?
2024-01-29T23:04:21
https://www.reddit.com/r/LocalLLaMA/comments/1ae92e6/any_reason_why_ollama_doesnt_have_the_whisper/
No-Camel-3819
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ae92e6
false
null
t3_1ae92e6
/r/LocalLLaMA/comments/1ae92e6/any_reason_why_ollama_doesnt_have_the_whisper/
false
false
self
2
null
chatglm3-6b-32k
11
I've got a question about THUDM/chatglm3-6b-32k ([link](https://huggingface.co/THUDM/chatglm3-6b-32k)), which I first encountered in a model review by u/ramprasad27 in a post [here.](https://www.reddit.com/r/LocalLLaMA/comments/18s61fb/pressuretested_the_most_popular_opensource_llms/) The reviewer suggested it has one of the best recall abilities at this context length. A while back, I attempted to get it up and running but eventually set the project (it started to feel like a project!) aside. I'm curious to know if anyone in this community uses this model regularly, particularly for summarization tasks. How has your experience been with it? Did you face any challenges in setting it up? How does it perform in terms of accuracy and efficiency?
2024-01-29T22:07:50
https://www.reddit.com/r/LocalLLaMA/comments/1ae7pu1/chatglm36b32k/
Hinged31
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ae7pu1
false
null
t3_1ae7pu1
/r/LocalLLaMA/comments/1ae7pu1/chatglm36b32k/
false
false
self
11
{'enabled': False, 'images': [{'id': 'L8GySIppqoYnMje-5KjDv9To7U7iUMM58uwJM_aZhR8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jaZgTd4wptnwaXzv4f_41lQ_i68E5n13slIWkC7FblE.jpg?width=108&crop=smart&auto=webp&s=2eb1ca221e9b8e26e41557c9ddf99ec287a1f5cf', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/jaZgTd4wptnwaXzv4f_41lQ_i68E5n13slIWkC7FblE.jpg?width=216&crop=smart&auto=webp&s=8bb37ca25852b17158329aa75a34a4c9246d7464', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/jaZgTd4wptnwaXzv4f_41lQ_i68E5n13slIWkC7FblE.jpg?width=320&crop=smart&auto=webp&s=5a8bb59e35f552c3264bc464a7d39d82716aa687', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/jaZgTd4wptnwaXzv4f_41lQ_i68E5n13slIWkC7FblE.jpg?width=640&crop=smart&auto=webp&s=71dbf925d369274a07489fff77eeece690d6431d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/jaZgTd4wptnwaXzv4f_41lQ_i68E5n13slIWkC7FblE.jpg?width=960&crop=smart&auto=webp&s=b2380a529f9fffd650d11c5440283b2943b374e4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/jaZgTd4wptnwaXzv4f_41lQ_i68E5n13slIWkC7FblE.jpg?width=1080&crop=smart&auto=webp&s=663d6714b98a2e74ba17bb8f2b5be85aa39451dc', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/jaZgTd4wptnwaXzv4f_41lQ_i68E5n13slIWkC7FblE.jpg?auto=webp&s=362cb6ae22f24cee39c741e554b08d6946e1ebfd', 'width': 1200}, 'variants': {}}]}
Can I run Mixtral 8 x 7b locally on a maxed out Mac Studio?
2
I know Mixtral 8x7B is a pretty big model. Is a maxed-out Mac Studio powerful enough to run it locally?
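A back-of-the-envelope check (the parameter count and bytes-per-weight figures below are approximations): Mixtral 8x7B has roughly 47B total parameters (the experts share attention layers, so it is less than a literal 8×7B), and memory use scales with the quantization level:

```python
# Rough memory estimates for Mixtral 8x7B at different quantization levels.
# All numbers are approximate; real GGUF files carry some extra overhead.
total_params_b = 46.7  # billions of parameters (approximate)

bytes_per_weight = {"fp16": 2.0, "q8_0": 1.06, "q4_K_M": 0.57}

estimates_gb = {q: total_params_b * b for q, b in bytes_per_weight.items()}
for quant, gb in estimates_gb.items():
    print(f"{quant}: ~{gb:.0f} GB")
```

So a Q4\_K\_M build (roughly 26-27 GB) fits comfortably in a maxed-out Studio's unified memory, and even fp16 (90+ GB) fits in a 192GB M2 Ultra configuration; the main question is speed, not fit.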
2024-01-29T22:05:56
https://www.reddit.com/r/LocalLLaMA/comments/1ae7o58/can_i_run_mixtral_8_x_7b_locally_on_a_maxed_out/
M3629
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ae7o58
false
null
t3_1ae7o58
/r/LocalLLaMA/comments/1ae7o58/can_i_run_mixtral_8_x_7b_locally_on_a_maxed_out/
false
false
self
2
null
What is the best way to learn the fundamentals of LLMs as a software engineer?
1
[removed]
2024-01-29T21:59:11
https://www.reddit.com/r/LocalLLaMA/comments/1ae7i0e/what_is_the_best_way_to_learn_the_fundamentals_of/
lightdreamscape
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ae7i0e
false
null
t3_1ae7i0e
/r/LocalLLaMA/comments/1ae7i0e/what_is_the_best_way_to_learn_the_fundamentals_of/
false
false
self
1
{'enabled': False, 'images': [{'id': '-_6t2RVCfsHr3NWG0MrnoyFXnKNDoM0GkIEMMUKe5TA', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/Xr6THSH83Pz8pUTGNOixQ9UMulEahngSYpAyqPfaVKE.jpg?width=108&crop=smart&auto=webp&s=110362bbc93d667246ceae163c4dd858ecf7c874', 'width': 108}, {'height': 107, 'url': 'https://external-preview.redd.it/Xr6THSH83Pz8pUTGNOixQ9UMulEahngSYpAyqPfaVKE.jpg?width=216&crop=smart&auto=webp&s=bfdc9f78c411f530d85fce5a98787c2719726c31', 'width': 216}, {'height': 159, 'url': 'https://external-preview.redd.it/Xr6THSH83Pz8pUTGNOixQ9UMulEahngSYpAyqPfaVKE.jpg?width=320&crop=smart&auto=webp&s=27abd3f7fec33580eb5e22b9b08a214b49575f7e', 'width': 320}, {'height': 319, 'url': 'https://external-preview.redd.it/Xr6THSH83Pz8pUTGNOixQ9UMulEahngSYpAyqPfaVKE.jpg?width=640&crop=smart&auto=webp&s=320201fc67d5a8a2f193348a39e919c82628ef64', 'width': 640}, {'height': 479, 'url': 'https://external-preview.redd.it/Xr6THSH83Pz8pUTGNOixQ9UMulEahngSYpAyqPfaVKE.jpg?width=960&crop=smart&auto=webp&s=75d649fbda5135aa5b747d195805b43ba347a0da', 'width': 960}, {'height': 539, 'url': 'https://external-preview.redd.it/Xr6THSH83Pz8pUTGNOixQ9UMulEahngSYpAyqPfaVKE.jpg?width=1080&crop=smart&auto=webp&s=0ff6be34a342bdbb735fc45c1d77049f6ece1855', 'width': 1080}], 'source': {'height': 556, 'url': 'https://external-preview.redd.it/Xr6THSH83Pz8pUTGNOixQ9UMulEahngSYpAyqPfaVKE.jpg?auto=webp&s=d8b39ea078f3441427676bf1cac1fffa281a84af', 'width': 1114}, 'variants': {}}]}
Mixture of Experts LLaVA Demo
35
>With only 3 billion selectively activated parameters, MoE-LLaVA performs similarly to LLaVA1.5-7B on visual understanding datasets and outperforms LLaVA1.5-13B in object hallucination benchmarks.
2024-01-29T21:55:49
https://huggingface.co/spaces/LanguageBind/MoE-LLaVA
ninjasaid13
huggingface.co
1970-01-01T00:00:00
0
{}
1ae7f5i
false
null
t3_1ae7f5i
/r/LocalLLaMA/comments/1ae7f5i/mixture_of_experts_llava_demo/
false
false
https://a.thumbs.redditm…utger3yxA_h8.jpg
35
{'enabled': False, 'images': [{'id': 'XEwPceY_kLcvTbUq0yLFtIWfP7YDlP5fqY3i2pxv3Rc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8IxR3DUtWRar7TyjjzmIo7PEZNVXXXI_vSARVqnTdAY.jpg?width=108&crop=smart&auto=webp&s=d0893830485fabe98d1539f54bfa475779416d07', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8IxR3DUtWRar7TyjjzmIo7PEZNVXXXI_vSARVqnTdAY.jpg?width=216&crop=smart&auto=webp&s=5c7a596b95391d21b6d5cd1ed59c4c90db6bde50', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8IxR3DUtWRar7TyjjzmIo7PEZNVXXXI_vSARVqnTdAY.jpg?width=320&crop=smart&auto=webp&s=fd013b9ec9370f723559e60078a591cd65c75060', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8IxR3DUtWRar7TyjjzmIo7PEZNVXXXI_vSARVqnTdAY.jpg?width=640&crop=smart&auto=webp&s=df7e723616ebc3c2547cf6022c0a3d2368c0d85f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8IxR3DUtWRar7TyjjzmIo7PEZNVXXXI_vSARVqnTdAY.jpg?width=960&crop=smart&auto=webp&s=feae4abc56267ac78e9d5bebeddc4a41b18e9124', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8IxR3DUtWRar7TyjjzmIo7PEZNVXXXI_vSARVqnTdAY.jpg?width=1080&crop=smart&auto=webp&s=7d5604310c0fd350f4006d3ce508087bebe8d7ca', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8IxR3DUtWRar7TyjjzmIo7PEZNVXXXI_vSARVqnTdAY.jpg?auto=webp&s=d4d7887e7fc28a70a62755e14cdd540e08887e6c', 'width': 1200}, 'variants': {}}]}
Where are we now?
145
2024-01-29T21:40:14
https://i.redd.it/eaa4s1d58gfc1.png
Wrong_User_Logged
i.redd.it
1970-01-01T00:00:00
0
{}
1ae71r0
false
null
t3_1ae71r0
/r/LocalLLaMA/comments/1ae71r0/where_are_we_now/
false
false
https://b.thumbs.redditm…kPkBmFeR9fiE.jpg
145
{'enabled': True, 'images': [{'id': 'QUIr7aOOLgEokSKvtQREo9pc_20BlCYjwbRLsWgzpTY', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/eaa4s1d58gfc1.png?width=108&crop=smart&auto=webp&s=927732efc1b3d12b3d21909f266be8d96e8c295a', 'width': 108}, {'height': 143, 'url': 'https://preview.redd.it/eaa4s1d58gfc1.png?width=216&crop=smart&auto=webp&s=74b7afd768331a6ba4b43b500c787e48fb1364b0', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/eaa4s1d58gfc1.png?width=320&crop=smart&auto=webp&s=dfedeeaa0c8063b88cb2219677c5b17079d39837', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/eaa4s1d58gfc1.png?width=640&crop=smart&auto=webp&s=95b5077d754bba93f074d57ce06bd392f0bd619e', 'width': 640}, {'height': 639, 'url': 'https://preview.redd.it/eaa4s1d58gfc1.png?width=960&crop=smart&auto=webp&s=6d01c77f8fa7e66ab5b691f9a00f286374c34c9a', 'width': 960}], 'source': {'height': 682, 'url': 'https://preview.redd.it/eaa4s1d58gfc1.png?auto=webp&s=795b277b2f03f16833acaaaca4773721c06d900f', 'width': 1024}, 'variants': {}}]}
Is AI overhyped?
1
[deleted]
2024-01-29T21:39:17
https://i.redd.it/llj6fgnz7gfc1.png
Wrong_User_Logged
i.redd.it
1970-01-01T00:00:00
0
{}
1ae70we
false
null
t3_1ae70we
/r/LocalLLaMA/comments/1ae70we/is_ai_overhyped/
false
false
https://b.thumbs.redditm…ImBX9tu_lhLc.jpg
1
{'enabled': True, 'images': [{'id': 'cQzos8UTjcKQJvCWo2MJ8QX7YxKPu7GR13BLJ0M-LRw', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/llj6fgnz7gfc1.png?width=108&crop=smart&auto=webp&s=882015d5f61949505ff3cdeb6433861fe73cbebd', 'width': 108}, {'height': 143, 'url': 'https://preview.redd.it/llj6fgnz7gfc1.png?width=216&crop=smart&auto=webp&s=818e8fd1ce0b8d78d666f95938972940693b33b1', 'width': 216}, {'height': 213, 'url': 'https://preview.redd.it/llj6fgnz7gfc1.png?width=320&crop=smart&auto=webp&s=2a2a30c0cf5f3ca86983886f088f1b1481d27a0d', 'width': 320}, {'height': 426, 'url': 'https://preview.redd.it/llj6fgnz7gfc1.png?width=640&crop=smart&auto=webp&s=3abbc5f2ff67251cca65593c680af0cbf9a78d11', 'width': 640}, {'height': 639, 'url': 'https://preview.redd.it/llj6fgnz7gfc1.png?width=960&crop=smart&auto=webp&s=2c3b5dcfdb9ac00567fe954a9bd67fb6337f5ad5', 'width': 960}], 'source': {'height': 682, 'url': 'https://preview.redd.it/llj6fgnz7gfc1.png?auto=webp&s=31e6e9f50a56ce11d099e6489709a09ffd9794b5', 'width': 1024}, 'variants': {}}]}
Does anyone happen to still have CarbonVillain 13B? 🤔
11
Does anyone still have a copy of CarbonVillain 13B? I've looked and looked before posting here but I can't find it anywhere. It's been deleted and seemingly entirely replaced with the 10.7B version, but I specifically want the 13B version that used to be found here: [https://huggingface.co/jeonsworld/CarbonVillain-13B-v1](https://huggingface.co/jeonsworld/CarbonVillain-13B-v1) I hate how this happens to good models all the time. 🤦‍♂️
2024-01-29T21:31:35
https://www.reddit.com/r/LocalLLaMA/comments/1ae6twn/does_anyone_happen_to_still_have_carbonvillain_13b/
Uncle_Warlock
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ae6twn
false
null
t3_1ae6twn
/r/LocalLLaMA/comments/1ae6twn/does_anyone_happen_to_still_have_carbonvillain_13b/
false
false
self
11
null
Extending Context 100x w/ Activation Beacon
1
[removed]
2024-01-29T21:05:45
https://arxiv.org/pdf/2401.03462.pdf
LoadingALIAS
arxiv.org
1970-01-01T00:00:00
0
{}
1ae67av
false
null
t3_1ae67av
/r/LocalLLaMA/comments/1ae67av/extending_context_100x_w_activation_beacon/
false
false
default
1
null
AI vs Humans: The Countdown Begins! 🤖🏁
2
Just stumbled upon this fascinating leaderboard over at [Hugging Face](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) (screenshot attached), and guess what? AI is hot on our heels! 😲 Looking at the progress, my back-of-the-napkin calculation says AIs might outpace us humans around May 2024. That's like, super soon! 🗓️ So, what do you think?
2024-01-29T21:05:32
https://www.reddit.com/r/LocalLLaMA/comments/1ae673l/ai_vs_humans_the_countdown_begins/
fanff
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ae673l
false
null
t3_1ae673l
/r/LocalLLaMA/comments/1ae673l/ai_vs_humans_the_countdown_begins/
false
false
self
2
{'enabled': False, 'images': [{'id': 'tjJ87qKJWLzmj-5WYTPdt0F_x4nG05KYznkxHl11ljc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/dEYg7tYorcKKjwTqLhf9IZA_JuhsDh-r7DtkAObAWCg.jpg?width=108&crop=smart&auto=webp&s=05aecc1a2ea320ce14e6f61f1cf8105636f82f58', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/dEYg7tYorcKKjwTqLhf9IZA_JuhsDh-r7DtkAObAWCg.jpg?width=216&crop=smart&auto=webp&s=471a63fb5a13c854b47f3236702d5f7a84960301', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/dEYg7tYorcKKjwTqLhf9IZA_JuhsDh-r7DtkAObAWCg.jpg?width=320&crop=smart&auto=webp&s=a4929dff31a71ea9a794ac008554a0adbac92ead', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/dEYg7tYorcKKjwTqLhf9IZA_JuhsDh-r7DtkAObAWCg.jpg?width=640&crop=smart&auto=webp&s=a14fe0dcc2a3c3e8eea6fc8846bd96e7783f12e0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/dEYg7tYorcKKjwTqLhf9IZA_JuhsDh-r7DtkAObAWCg.jpg?width=960&crop=smart&auto=webp&s=8e6fa8ae19af599bd8db63ee9e9044a622b7a85a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/dEYg7tYorcKKjwTqLhf9IZA_JuhsDh-r7DtkAObAWCg.jpg?width=1080&crop=smart&auto=webp&s=d0174dc11978bd2b1010516a10270a102f73a551', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/dEYg7tYorcKKjwTqLhf9IZA_JuhsDh-r7DtkAObAWCg.jpg?auto=webp&s=d4bd8bba83b8f1f87e63e13429fb006f53285be5', 'width': 1200}, 'variants': {}}]}
XEON E3-1225 @3.30Ghz strong enough for 70B Llama GGUF?
4
Hello - I have a Dell T30 server and I'm interested in loading it up with 64GB of RAM and playing around with local LLMs, not so much training. Is this powerful enough?
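A quick sanity check, assuming CPU inference is memory-bandwidth bound (a common rule of thumb: every generated token reads the whole model from RAM) and using rough numbers for the T30's dual-channel DDR4:

```python
# Upper-bound token rate for CPU-only inference of a 70B model.
# tokens/s <= memory bandwidth / model size; all numbers are approximate.
model_size_gb = 40        # ~70B at Q4_K_M quantization
mem_bandwidth_gbs = 34    # dual-channel DDR4-2133, theoretical peak

upper_bound_tps = mem_bandwidth_gbs / model_size_gb
print(f"~{upper_bound_tps:.2f} tokens/s, best case")
```

In practice you would land below this bound, so expect well under 1 token/s on this box. 64GB of RAM is enough to *load* a Q4 70B, but memory bandwidth, not the E3-1225's clock speed, is the bottleneck.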
2024-01-29T20:48:17
https://www.reddit.com/r/LocalLLaMA/comments/1ae5s10/xeon_e31225_330ghz_strong_enough_for_70b_llama/
GardenGreens12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ae5s10
false
null
t3_1ae5s10
/r/LocalLLaMA/comments/1ae5s10/xeon_e31225_330ghz_strong_enough_for_70b_llama/
false
false
self
4
null
RAG for documents with chapters and sub-chapters
1
[removed]
2024-01-29T20:14:29
https://www.reddit.com/r/LocalLLaMA/comments/1ae4ym2/rag_for_documents_with_chapters_and_subchapters/
Electronic-Letter592
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ae4ym2
false
null
t3_1ae4ym2
/r/LocalLLaMA/comments/1ae4ym2/rag_for_documents_with_chapters_and_subchapters/
false
false
self
1
null
LLM GPU buyer beware?
5
I’m in the process of researching an upgrade to my workstation to run LLMs, and noticed two attractive but seemingly terrible recent entries into the market. The RTX 4060 Ti and the RX 7600 XT are eye-catching as lower-end consumer GPU products with more VRAM than we normally see (16GB).

However, without getting too technical, both of these products are refreshes of dated GPU SKUs, adding more VRAM but otherwise performing poorly (at this price point) at alternative use cases such as productivity or gaming.

Other use cases aside, for LLMs these releases appear to be chasing additional profit from the rise in niche demand for the other entry-level SKUs people have been gravitating to, such as the 3060, which fundamentally performs at an identical speed, just with 4GB less VRAM. That turns into a hefty price tag for 4GB of VRAM. These two SKUs DO become more interesting down the road, at a MUCH lower price. My best guess is this is a squeeze to try to profit off card designs that are rapidly losing utility as demand for consumer AI-capable products rises and drives innovation.

My honest opinion would be to stick with the actual budget hardware for entry-level use if you see the need. However, it will be interesting to see what happens over the next 12-24 months, when we will actually see products in production that are tuned for entry-level AI/LLM performance. Intel, AMD and NVIDIA are all going to be releasing chipsets with capabilities aiming at Apple's M series, which uses CPU/RAM in a manner that is ultra-efficient for LLMs. This will bring the cost of an efficient LLM setup way down, and performance is likely to have everyone currently running an LLM rig dumping their cards on eBay.

As a side note, I’m not sure how the current GPU market is going to avoid imploding on itself over the next two years, as innovation has taken a significant leap.

Anyhow, do your homework before buying something that might be a paperweight in 12-24 months :)
2024-01-29T20:06:00
https://www.reddit.com/r/LocalLLaMA/comments/1ae4r1e/llm_gpu_buyer_beware/
coaststl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ae4r1e
false
null
t3_1ae4r1e
/r/LocalLLaMA/comments/1ae4r1e/llm_gpu_buyer_beware/
false
false
self
5
null
Thoughts on AMD Strix Halo Chip
5
I recently read the discussion breaking down why Apple's unified memory architecture is pretty compelling for local models, since it falls in a sweet spot for running LLMs of the scale discussed here. It reminded me that AMD is planning to launch a proper competitor to the Apple Max series chips towards the end of the year in the form of Strix Halo. I haven't seen any discussion of it here, so I wanted to get the opinion of someone more knowledgeable on its capabilities; of course, the main caveat is that it's an AMD chip, so no CUDA support.
2024-01-29T20:03:05
https://www.reddit.com/r/LocalLLaMA/comments/1ae4ohz/thoughts_on_amd_strix_halo_chip/
PMARC14
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ae4ohz
false
null
t3_1ae4ohz
/r/LocalLLaMA/comments/1ae4ohz/thoughts_on_amd_strix_halo_chip/
false
false
self
5
null
Msty: Offline first, privacy focused companion for parallel chat with Local and Online Models ✨
3
I've created an app called Msty that lets you chat with local models (powered by Ollama) and online models like OpenAI GPTs and Mistral concurrently. What makes it special is its user-friendly approach, designed for all levels of expertise - no need to understand LLM technicalities if you don't want to! This is currently at a very early stage and I'm announcing it publicly for the first time here, so I welcome your feedback if you get to try it. I just released a big update today and have some big features planned for the next couple of months. It's available for macOS now, with plans to expand to Windows and Linux soon. Check it out at [https://msty.app/](https://msty.app/). (Please note that some screenshots might be outdated as we're releasing updates almost every week :)
2024-01-29T20:01:42
https://www.reddit.com/r/LocalLLaMA/comments/1ae4n7o/msty_offline_first_privacy_focused_companion_for/
askgl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ae4n7o
false
null
t3_1ae4n7o
/r/LocalLLaMA/comments/1ae4n7o/msty_offline_first_privacy_focused_companion_for/
false
false
self
3
{'enabled': False, 'images': [{'id': 'UudG83sw_q_Qad_QwTN6uV9imOpgBfS9YVyxeRPcj-g', 'resolutions': [{'height': 77, 'url': 'https://external-preview.redd.it/U-3hpJP92RzVCl2vMoAFgAkJjUtCbRaZNeVDF0qZBF0.jpg?width=108&crop=smart&auto=webp&s=d19f1f3d063b24d78bddc48055e02ea96ae35d9b', 'width': 108}, {'height': 154, 'url': 'https://external-preview.redd.it/U-3hpJP92RzVCl2vMoAFgAkJjUtCbRaZNeVDF0qZBF0.jpg?width=216&crop=smart&auto=webp&s=b4e9ce2ce43fa18f70bce7718ad0acf33a89e78a', 'width': 216}, {'height': 229, 'url': 'https://external-preview.redd.it/U-3hpJP92RzVCl2vMoAFgAkJjUtCbRaZNeVDF0qZBF0.jpg?width=320&crop=smart&auto=webp&s=3b62f524b0c00a474b8860fc6b1a48ec6ef47fc5', 'width': 320}, {'height': 458, 'url': 'https://external-preview.redd.it/U-3hpJP92RzVCl2vMoAFgAkJjUtCbRaZNeVDF0qZBF0.jpg?width=640&crop=smart&auto=webp&s=69ae3c3b8f22c404e687a201fb2f7ec9fe3f5667', 'width': 640}, {'height': 688, 'url': 'https://external-preview.redd.it/U-3hpJP92RzVCl2vMoAFgAkJjUtCbRaZNeVDF0qZBF0.jpg?width=960&crop=smart&auto=webp&s=c7d7f24c6f7e94f824cb8dc10b53f04d8ccc2ee8', 'width': 960}, {'height': 774, 'url': 'https://external-preview.redd.it/U-3hpJP92RzVCl2vMoAFgAkJjUtCbRaZNeVDF0qZBF0.jpg?width=1080&crop=smart&auto=webp&s=833a9d5dfcc3f1c46059e91053672a3819cfc58e', 'width': 1080}], 'source': {'height': 1758, 'url': 'https://external-preview.redd.it/U-3hpJP92RzVCl2vMoAFgAkJjUtCbRaZNeVDF0qZBF0.jpg?auto=webp&s=ebdf1c23f0bf36c2f171848fbd573238ba4db7b9', 'width': 2453}, 'variants': {}}]}
Naked A100
95
Thought I'd throw up a pic captured during the build: an A100 without any casing. Took me ages to realise the huge gold things are the power distributors/converters. I'd presumed they were memory, but then realised the memory (HBM2) is on the main die.
2024-01-29T20:00:42
https://www.reddit.com/gallery/1ae4m73
BreakIt-Boris
reddit.com
1970-01-01T00:00:00
0
{}
1ae4m73
false
null
t3_1ae4m73
/r/LocalLLaMA/comments/1ae4m73/naked_a100/
false
false
https://a.thumbs.redditm…HA8qdbGVnv_0.jpg
95
null
Are 3x Nvidia K80s better than a new mobo with the same amount of DDR5?
6
Just wondering... Refurbished Nvidia K80s are quite cheap around here, and each one has 24GB of GDDR5. Newer systems use DDR5 too, and the big pro is that you could swap RAM if something goes wrong or if you want to upgrade. So... are those video cards better anyway? Thank you all in advance
2024-01-29T19:45:36
https://www.reddit.com/r/LocalLLaMA/comments/1ae48xd/are_3xnvidia_k80_better_than_a_new_mobo_with_the/
Relative_Bit_7250
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ae48xd
false
null
t3_1ae48xd
/r/LocalLLaMA/comments/1ae48xd/are_3xnvidia_k80_better_than_a_new_mobo_with_the/
false
false
self
6
null
Run llms using xdna drivers/ryzen ai
1
[removed]
2024-01-29T19:40:56
https://www.reddit.com/r/LocalLLaMA/comments/1ae44py/run_llms_using_xdna_driversryzen_ai/
Wonderful-Eye-71
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ae44py
false
null
t3_1ae44py
/r/LocalLLaMA/comments/1ae44py/run_llms_using_xdna_driversryzen_ai/
false
false
self
1
null
🌐 Leeroo Orchestration-of-Experts O.O.E "[Research]"
1
[removed]
2024-01-29T19:39:23
https://www.reddit.com/r/LocalLLaMA/comments/1ae43bu/leeroo_orchestrationofexperts_ooe_research/
AALISHKH
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ae43bu
false
null
t3_1ae43bu
/r/LocalLLaMA/comments/1ae43bu/leeroo_orchestrationofexperts_ooe_research/
false
false
self
1
null
Replacing ChatGPT 4 - $20 Subscription
6
I use ChatGPT mainly for code generation. I don’t care much for the rest of the features. Currently have: - Kubernetes cluster on 3x Lenovo M710q 32G RAM - ESXI on Threadripper 2970wx / 128G RAM build Assuming a budget of $1500 to buy more hardware, what can I achieve to get similar or better performance than GPT-4?
2024-01-29T19:35:50
https://www.reddit.com/r/LocalLLaMA/comments/1ae40bf/replacing_chatgpt_4_20_subscription/
chkpwd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ae40bf
false
null
t3_1ae40bf
/r/LocalLLaMA/comments/1ae40bf/replacing_chatgpt_4_20_subscription/
false
false
self
6
null
We're giving a grant for a port of Nano-GPT to a Haskell-like language called HVM.
9
2024-01-29T19:27:47
https://twitter.com/VictorTaelin/status/1752031528148693373
SrPeixinho
twitter.com
1970-01-01T00:00:00
0
{}
1ae3t64
false
{'oembed': {'author_name': 'Taelin', 'author_url': 'https://twitter.com/VictorTaelin', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">HOC is offering a grant for a purely functional port of Nano-GPT to HVM!<br><br>We&#39;ll be helping with the HVM details, so, if you&#39;re comfortable with Haskell-like languages and GPTs, this could be a great project for you.<br><br>DM me!</p>&mdash; Taelin (@VictorTaelin) <a href="https://twitter.com/VictorTaelin/status/1752031528148693373?ref_src=twsrc%5Etfw">January 29, 2024</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/VictorTaelin/status/1752031528148693373', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_1ae3t64
/r/LocalLLaMA/comments/1ae3t64/were_giving_a_grant_for_a_port_of_nanogpt_to_a/
false
false
https://b.thumbs.redditm…a2AytgvLydgg.jpg
9
{'enabled': False, 'images': [{'id': '1MMIJxJiRDSpmwPyOX5ustOqWydAvdN2PKTUcdPhM58', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/v5jbnxSDUijMq82wUK_DTInlzWCApXrb4Hj9gntr_Fc.jpg?width=108&crop=smart&auto=webp&s=6a32310b4a757cfb6105381604142b01ae0a96ff', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/v5jbnxSDUijMq82wUK_DTInlzWCApXrb4Hj9gntr_Fc.jpg?auto=webp&s=604d2849acf8034c40d2d80f9ffe5c0f77cc172e', 'width': 140}, 'variants': {}}]}
Some rumors are claiming this Mistral-Medium got leaked
34
So, some folks say that this model is a leak of Mistral-Medium. Have we already discussed it, and did I miss it? Link: https://huggingface.co/miqudev/miqu-1-70b I have downloaded it in q5 format, and on my regular hardware it looks too heavy and slow for a 70b, so I guess MoE is being used under the hood. Could someone more technical take a look at the layer structure here?
2024-01-29T19:15:46
https://www.reddit.com/r/LocalLLaMA/comments/1ae3ijf/some_rumors_are_claiming_this_mistralmedium_got/
Shir_man
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ae3ijf
false
null
t3_1ae3ijf
/r/LocalLLaMA/comments/1ae3ijf/some_rumors_are_claiming_this_mistralmedium_got/
false
false
self
34
{'enabled': False, 'images': [{'id': 'YwK0i5uwzktX1nH_8wCb9scLo-p_k7U36rHB1ExOeZo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/BQcqnWPMK5Y3U1I8wKe-mQjzHW7J109B4lbTrb7r6sA.jpg?width=108&crop=smart&auto=webp&s=dd5d7f81d68648edfb0c8136135c812f616df207', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/BQcqnWPMK5Y3U1I8wKe-mQjzHW7J109B4lbTrb7r6sA.jpg?width=216&crop=smart&auto=webp&s=691394ce3210cb799685250dc0f8ae8f4dbc4e9a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/BQcqnWPMK5Y3U1I8wKe-mQjzHW7J109B4lbTrb7r6sA.jpg?width=320&crop=smart&auto=webp&s=28e16fa9a638bcc13a3b753d34aebdd5870da93e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/BQcqnWPMK5Y3U1I8wKe-mQjzHW7J109B4lbTrb7r6sA.jpg?width=640&crop=smart&auto=webp&s=691b1d20b9d40fc5faf20fcbffb32f46820a4525', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/BQcqnWPMK5Y3U1I8wKe-mQjzHW7J109B4lbTrb7r6sA.jpg?width=960&crop=smart&auto=webp&s=7fbbc5bb3550260d003a4468ef960c195d5c3e0f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/BQcqnWPMK5Y3U1I8wKe-mQjzHW7J109B4lbTrb7r6sA.jpg?width=1080&crop=smart&auto=webp&s=4a08f0b7eb3fd4e39eee342225af416dc9a98ffb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/BQcqnWPMK5Y3U1I8wKe-mQjzHW7J109B4lbTrb7r6sA.jpg?auto=webp&s=5541acffc28b218860ef03141b95aff9b62f33f3', 'width': 1200}, 'variants': {}}]}
Ambitious LLM to help deal with 'stuff'
3
Hi All, I'm normally a lurker, but decided to throw this one out there as a sanity check, and I'm hoping the experts among you will weigh in. Context: I'm a massive ADHD-head, run a small business, have no sense of 'I can't do that', and have enough vision to see what could happen, but perhaps not enough knowledge to know that I shouldn't be the one to do it, or that it isn't quite possible...yet. I started coding using GPT back in Nov 22 when it came out. Still use it now with Cursor (awesome btw). I constantly get frustrated at the lack of abstraction in a 'useful' sense. By that I mean that there doesn't seem to be much in the way of anything between 'coding' and 'here is a webui that does this one thing'. I want to build a Lego-like system for AI. Visual based, a bit like ComfyUI in the SD world, but far simpler. I want to be able to set up a RAG workflow, a LLaVA screenshot workflow or whatever. Common components with if/then logic. I want to be able to send a document to an LLM for embedding, then have an agent create a cheat-sheet, then another agent appraise the cheat-sheet and give suggestions, then revisit it in a loop until it's happy. You could only (afford to) do this with an LLM as far as I'm concerned, because you're effectively abstracting out each step of the process to a different AI agent. There are elements of things like Autogen, txtai, and Ollama in there, but the general way they work either abstracts too far, or not enough, so I can only think to build my own. I'm getting frustrated with Gradio (the docs really aren't good enough IMHO), and I can't find or think of any GUI builder that is both not ugly or pure code, or so limited it's no use. Yeah, I'm not a great coder. Yeah, I'm sure I'll have lots of issues integrating function calling and shell commands and as close to realtime STT as I can get, but I need a way to try and get this ball rolling. Am I a loon? Am I too early? A year ago I knew I was, but now?
Can someone with some chops weigh in here, because I can't be the only one who wants this kind of workflow solution, or am I missing something? TIA!
2024-01-29T19:11:03
https://www.reddit.com/r/LocalLLaMA/comments/1ae3eci/ambitious_lllm_to_helpe_dealt_with_stuff/
Captain_Bacon_X
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ae3eci
false
null
t3_1ae3eci
/r/LocalLLaMA/comments/1ae3eci/ambitious_lllm_to_helpe_dealt_with_stuff/
false
false
self
3
null
I created a python tool to easily load repos, docs or papers into an LLM
1
[removed]
2024-01-29T19:06:03
https://www.reddit.com/r/LocalLLaMA/comments/1ae3a0f/i_created_a_python_tool_to_easily_load_repos_docs/
jimmc414
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ae3a0f
false
null
t3_1ae3a0f
/r/LocalLLaMA/comments/1ae3a0f/i_created_a_python_tool_to_easily_load_repos_docs/
false
false
https://b.thumbs.redditm…bilPXxmka30w.jpg
1
null
This can make a huge difference. Extending context from 4k to 400k. Llama-2-chat-7b.
56
Take a look on this [https://arxiv.org/pdf/2401.03462.pdf](https://arxiv.org/pdf/2401.03462.pdf) Github here [https://github.com/FlagOpen/FlagEmbedding/tree/master/Long\_LLM/activation\_beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon)
2024-01-29T19:03:30
https://www.reddit.com/r/LocalLLaMA/comments/1ae37sn/this_can_make_a_huge_difference_extending_context/
AIMatrixRedPill
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ae37sn
false
null
t3_1ae37sn
/r/LocalLLaMA/comments/1ae37sn/this_can_make_a_huge_difference_extending_context/
false
false
self
56
null
Processor for quad 4090 setup?
4
I am building a quad 4090 (24GB) setup with 256GB RAM. What processor would you recommend? I guess some Xeon or Epyc, but I'm not sure about which one would have the best price/performance ratio. I am OK spending up to 1500-2000€ on the processor. Ideally I would like to run a quantized 70B model and do some CV training.
2024-01-29T18:50:16
https://www.reddit.com/r/LocalLLaMA/comments/1ae2w00/processor_for_quad_4090_setup/
Kyriales_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ae2w00
false
null
t3_1ae2w00
/r/LocalLLaMA/comments/1ae2w00/processor_for_quad_4090_setup/
false
false
self
4
null
gpt-4-0125-preview appeared on LMSYS passing the apple test 24/25. A direct response to Bard catching up?
1
2024-01-29T18:45:24
https://i.redd.it/92peb6drcffc1.png
kken
i.redd.it
1970-01-01T00:00:00
0
{}
1ae2rp6
false
null
t3_1ae2rp6
/r/LocalLLaMA/comments/1ae2rp6/gpt4125preview_appeared_on_lmsys_passing_the/
false
false
https://b.thumbs.redditm…6M2U-47Cg1cA.jpg
1
{'enabled': True, 'images': [{'id': 'pLXETHx4Pt7DMrxj0mM44QizAENa0R3aARrNOw71mMU', 'resolutions': [{'height': 95, 'url': 'https://preview.redd.it/92peb6drcffc1.png?width=108&crop=smart&auto=webp&s=b987ce7ac8a1b0ddc6ccb0686bb577bf6d323695', 'width': 108}, {'height': 190, 'url': 'https://preview.redd.it/92peb6drcffc1.png?width=216&crop=smart&auto=webp&s=898c22516d25f465d7ab17268cfdec2d927a48e4', 'width': 216}, {'height': 281, 'url': 'https://preview.redd.it/92peb6drcffc1.png?width=320&crop=smart&auto=webp&s=d9a1559c76b894c37bc81486426709e394927ea1', 'width': 320}, {'height': 563, 'url': 'https://preview.redd.it/92peb6drcffc1.png?width=640&crop=smart&auto=webp&s=f8ac247f4283b7ab1561421f10381b3f2b3e0366', 'width': 640}], 'source': {'height': 685, 'url': 'https://preview.redd.it/92peb6drcffc1.png?auto=webp&s=06108d89982d3374843e68045ae83c304b197db9', 'width': 778}, 'variants': {}}]}
Alibaba announces Qwen-VL
43
2024-01-29T18:29:23
https://qwenlm.github.io/blog/qwen-vl/
Mandus_Therion
qwenlm.github.io
1970-01-01T00:00:00
0
{}
1ae2d6x
false
null
t3_1ae2d6x
/r/LocalLLaMA/comments/1ae2d6x/alibaba_announces_qwenvl/
false
false
default
43
null
TF-IDF + Embedding for RAGs?
1
Hi people, has anyone tried filtering out irrelevant tokens using TF-IDF and then embedding the documents for retrieval? Would it make sense, or would any accuracy gain come at the cost of semantics? Using Azure search service for the vector store and retrieval, btw. Would be great to get some insights here
2024-01-29T18:22:50
https://www.reddit.com/r/LocalLLaMA/comments/1ae27d9/tfidf_embedding_for_rags/
hag_o_hi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ae27d9
false
null
t3_1ae27d9
/r/LocalLLaMA/comments/1ae27d9/tfidf_embedding_for_rags/
false
false
self
1
null
Trying out Etheria 55b, but getting an odd output. Anyone had luck with it?
1
Has anyone tried Etheria 55b? I downloaded the q8 from [TheBloke's gguf page for it.](https://huggingface.co/TheBloke/Etheria-55b-v0.1-GGUF) I was pretty excited, because it's a Goliath 120b style merge for the Yi models. Unfortunately, I can't seem to get it to work in Oobabooga. No matter what instruction template I use or preset I use, I get 2000+ tokens of just ><fim\_middle> Over and over and over. Anyone had any luck with it?
2024-01-29T18:16:12
https://www.reddit.com/r/LocalLLaMA/comments/1ae217k/trying_out_etheria_55b_but_getting_an_odd_output/
SomeOddCodeGuy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ae217k
false
null
t3_1ae217k
/r/LocalLLaMA/comments/1ae217k/trying_out_etheria_55b_but_getting_an_odd_output/
false
false
self
1
{'enabled': False, 'images': [{'id': 'x39YczXitygrna_vbmGLlMs_VK1R0iBX9hAxDutJvF4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/n862czSuJESrdX7R4r9ABWwb2QyiJ6XXP3vVvd1KxQc.jpg?width=108&crop=smart&auto=webp&s=cf53fa4957229a4eba0c28f61f99a40f12b9b5cf', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/n862czSuJESrdX7R4r9ABWwb2QyiJ6XXP3vVvd1KxQc.jpg?width=216&crop=smart&auto=webp&s=f7c7e4448cf19077c5b4930d4b05b924800862e7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/n862czSuJESrdX7R4r9ABWwb2QyiJ6XXP3vVvd1KxQc.jpg?width=320&crop=smart&auto=webp&s=771617b16b92e4d5bfbf510113166a087e601a2d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/n862czSuJESrdX7R4r9ABWwb2QyiJ6XXP3vVvd1KxQc.jpg?width=640&crop=smart&auto=webp&s=3d08d186ce1db1e97b0f18831b3e6b41dab44de0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/n862czSuJESrdX7R4r9ABWwb2QyiJ6XXP3vVvd1KxQc.jpg?width=960&crop=smart&auto=webp&s=e06c0255a62298817da51c350993286a9c65902c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/n862czSuJESrdX7R4r9ABWwb2QyiJ6XXP3vVvd1KxQc.jpg?width=1080&crop=smart&auto=webp&s=dc4fad98760b2cadc9a533af2386e21303377da2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/n862czSuJESrdX7R4r9ABWwb2QyiJ6XXP3vVvd1KxQc.jpg?auto=webp&s=4e13800328d2424f13cee6fb1987668dc8ad167c', 'width': 1200}, 'variants': {}}]}
How to use AutoGen Studio with Text Gen (pictures included)
1
[removed]
2024-01-29T18:07:49
https://www.reddit.com/r/LocalLLaMA/comments/1ae1tfu/how_to_use_autogen_studio_with_text_gen_pictures/
Inevitable-Start-653
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ae1tfu
false
null
t3_1ae1tfu
/r/LocalLLaMA/comments/1ae1tfu/how_to_use_autogen_studio_with_text_gen_pictures/
false
false
https://b.thumbs.redditm…GrMjUnuAlpFE.jpg
1
null
Working on document summary and Q&A (in-house deployment) - need suggestions
3
Working on a project for document summarization and Q&A. Suggest some good open-source models with low memory requirements (currently working with a Llama 7B 4-bit quantized model). One problem is that the documents' token length is 16k while the model's context length is only 4k.
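A common workaround for the 16k-vs-4k mismatch is map-reduce summarization: split the document into overlapping chunks that fit the context, summarize each, then summarize the partial summaries. A minimal sketch (function names, word-based chunking, and prompt wording are illustrative; words only approximate tokens):

```python
def chunk_text(text, max_words=3000, overlap=200):
    """Split text into overlapping word-based chunks (rough proxy for tokens)."""
    words = text.split()
    step = max_words - overlap
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), step)]

def summarize(text, llm):
    """Map-reduce: summarize each chunk, then combine the partial summaries."""
    parts = [llm(f"Summarize the following text:\n{c}") for c in chunk_text(text)]
    if len(parts) == 1:
        return parts[0]
    return llm("Combine these partial summaries into one summary:\n"
               + "\n\n".join(parts))
```

Here `llm` is any callable that takes a prompt and returns text, so the same code works with a local 4-bit Llama 7B behind llama.cpp or any other backend.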
2024-01-29T18:00:10
https://www.reddit.com/r/LocalLLaMA/comments/1ae1mal/working_on_documents_summary_and_qa_in_house/
Alert_Director_2836
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ae1mal
false
null
t3_1ae1mal
/r/LocalLLaMA/comments/1ae1mal/working_on_documents_summary_and_qa_in_house/
false
false
self
3
null
Meta releases Code Llama 70B, claims 67+ HumanEval
135
Meta has released the checkpoints of a new series of code models. They have the same llama 2 license. From their announcement: Today we’re releasing Code Llama 70B: a new, more performant version of our LLM for code generation — available under the same license as previous Code Llama models. Download the models ➡️ https://ai.meta.com/resources/models-and-libraries/llama-downloads/?utm_source=twitter&utm_medium=organic_social&utm_campaign=codellama&utm_content=image • CodeLlama-70B • CodeLlama-70B-Python • CodeLlama-70B-Instruct You can find the HF transformers checkpoints here https://huggingface.co/codellama
2024-01-29T17:38:38
https://huggingface.co/codellama
Amgadoz
huggingface.co
1970-01-01T00:00:00
0
{}
1ae13f7
false
null
t3_1ae13f7
/r/LocalLLaMA/comments/1ae13f7/meta_releases_code_llama270b_claims_67_humaneval/
false
false
https://b.thumbs.redditm…fzS92oSmL4Tk.jpg
135
{'enabled': False, 'images': [{'id': 'gmvczC1bSE838-GH8pLblnlMCkc5OMVllUs6haTvkb8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/jSSddtEy3YNYcaCtKmhyXjZYFZ4F48SglhbqdERHjks.jpg?width=108&crop=smart&auto=webp&s=c354ddb7c49f515a78b8cb787771ba6fad9dbcc2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/jSSddtEy3YNYcaCtKmhyXjZYFZ4F48SglhbqdERHjks.jpg?width=216&crop=smart&auto=webp&s=5684c3b1d94a8f08534b828c74a7312979e4e2b8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/jSSddtEy3YNYcaCtKmhyXjZYFZ4F48SglhbqdERHjks.jpg?width=320&crop=smart&auto=webp&s=8d3c3d292fbb5e9ba34ccf8e07f172217ca4b82c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/jSSddtEy3YNYcaCtKmhyXjZYFZ4F48SglhbqdERHjks.jpg?width=640&crop=smart&auto=webp&s=cf823c05cbbcf7eec252d1bcb4d09dc93386f2eb', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/jSSddtEy3YNYcaCtKmhyXjZYFZ4F48SglhbqdERHjks.jpg?width=960&crop=smart&auto=webp&s=c26932d1b43a3fa8c6a8747635e718d9a514315d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/jSSddtEy3YNYcaCtKmhyXjZYFZ4F48SglhbqdERHjks.jpg?width=1080&crop=smart&auto=webp&s=16f6b83feb3ad948f531b1848810f1ede6efb028', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/jSSddtEy3YNYcaCtKmhyXjZYFZ4F48SglhbqdERHjks.jpg?auto=webp&s=ba8864868c13da154c9211ce1149016af71c979c', 'width': 1200}, 'variants': {}}]}
How many epochs do you train an LLM for, in the case of a text completion dataset? I've always read that one epoch is optimal.
10
2024-01-29T17:28:44
https://i.redd.it/b51kxk6zxefc1.jpeg
91o291o
i.redd.it
1970-01-01T00:00:00
0
{}
1ae0uig
false
null
t3_1ae0uig
/r/LocalLLaMA/comments/1ae0uig/how_many_epochs_do_you_train_an_llm_for_in_the/
false
false
https://b.thumbs.redditm…vG3fzv7kD3Ik.jpg
10
{'enabled': True, 'images': [{'id': 'was5DoeesUN9yEGLVUND4uq2VCdUcqfxa9__CPrLuvw', 'resolutions': [{'height': 78, 'url': 'https://preview.redd.it/b51kxk6zxefc1.jpeg?width=108&crop=smart&auto=webp&s=17659533e7c17135e2603a428b6f8240f2ee425d', 'width': 108}, {'height': 156, 'url': 'https://preview.redd.it/b51kxk6zxefc1.jpeg?width=216&crop=smart&auto=webp&s=382837ba8495093e20b72daf5a626564bdc52f5b', 'width': 216}, {'height': 231, 'url': 'https://preview.redd.it/b51kxk6zxefc1.jpeg?width=320&crop=smart&auto=webp&s=065050e9b8338e1a027bd739749f0d000e4e6d03', 'width': 320}], 'source': {'height': 428, 'url': 'https://preview.redd.it/b51kxk6zxefc1.jpeg?auto=webp&s=f977e62cb91273d4fbcd1aa4f01c6c49ea4d8644', 'width': 591}, 'variants': {}}]}
LLM that knows about Stable Diffusion (XL) prompts?
5
Which LLMs know how to write a good prompt for Stable Diffusion / Stable Diffusion XL? The main, freely accessible ones (ChatGPT 3.5, Bing and Bard) know nothing about it. But are there perhaps some freely available on Hugging Face (GGUF preferred)?
2024-01-29T17:27:44
https://www.reddit.com/r/LocalLLaMA/comments/1ae0tmn/llm_that_knows_about_stable_diffusion_xl_prompts/
StableLlama
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ae0tmn
false
null
t3_1ae0tmn
/r/LocalLLaMA/comments/1ae0tmn/llm_that_knows_about_stable_diffusion_xl_prompts/
false
false
self
5
null
deploying mistral instruct v2 on aws sagemaker.
1
I have been trying to deploy Mistral 7B Instruct v2 on AWS SageMaker using the code provided on the Hugging Face page of the model. The deployment has not been successful. The same code works with other models, but not with Mistral Instruct v2. Any thoughts on how to fix this?
2024-01-29T17:15:34
https://www.reddit.com/r/LocalLLaMA/comments/1ae0ims/deploying_mistral_instruct_v2_on_aws_sagemaker/
Necessary_Student_15
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ae0ims
false
null
t3_1ae0ims
/r/LocalLLaMA/comments/1ae0ims/deploying_mistral_instruct_v2_on_aws_sagemaker/
false
false
self
1
null
"We’re releasing Code Llama 70B: the most performant version of our LLM for code generation to date...."
289
2024-01-29T17:08:55
https://www.linkedin.com/posts/aiatmeta_today-were-releasing-code-llama-70b-the-activity-7157779543989055488-vBQO?utm_source=share&utm_medium=member_desktop
phoneixAdi
linkedin.com
1970-01-01T00:00:00
0
{}
1ae0cfa
false
null
t3_1ae0cfa
/r/LocalLLaMA/comments/1ae0cfa/were_releasing_code_llama_70b_the_most_performant/
false
false
https://b.thumbs.redditm…oAFw0DOb_5Sk.jpg
289
{'enabled': False, 'images': [{'id': 'mDU7WbM5CSYahODtC3XdJpv2tctmY8h-MstL23eT7fM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/kfGe068g_1TiPRzPLIOK55LvQxeAqkun9M3clqJn6PE.jpg?width=108&crop=smart&auto=webp&s=f69acfa7d7d753c1a7dd6d3c2afdf1a57130ac7b', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/kfGe068g_1TiPRzPLIOK55LvQxeAqkun9M3clqJn6PE.jpg?width=216&crop=smart&auto=webp&s=3cb49a24c0ddbb108257eb49ba851299542f8991', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/kfGe068g_1TiPRzPLIOK55LvQxeAqkun9M3clqJn6PE.jpg?width=320&crop=smart&auto=webp&s=4d8fd45e566646eaecb35e53dddb27838ff84e82', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/kfGe068g_1TiPRzPLIOK55LvQxeAqkun9M3clqJn6PE.jpg?width=640&crop=smart&auto=webp&s=724264b088848272f056e8d2f326a84f031c5eee', 'width': 640}], 'source': {'height': 420, 'url': 'https://external-preview.redd.it/kfGe068g_1TiPRzPLIOK55LvQxeAqkun9M3clqJn6PE.jpg?auto=webp&s=9a3642c62dc4b8fbf7d9b6b2a2fcbf06f7067441', 'width': 800}, 'variants': {}}]}
Model trained to remove PII?
1
[removed]
2024-01-29T16:53:52
https://www.reddit.com/r/LocalLLaMA/comments/1adzyfk/model_trained_to_remove_pii/
FoolHooligan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1adzyfk
false
null
t3_1adzyfk
/r/LocalLLaMA/comments/1adzyfk/model_trained_to_remove_pii/
false
false
self
1
null
Pure C/C++ codes to train small GPT modes on a single computer?
1
[removed]
2024-01-29T16:42:07
https://www.reddit.com/r/LocalLLaMA/comments/1adzo4c/pure_cc_codes_to_train_small_gpt_modes_on_a/
Pitiful-You-8410
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1adzo4c
false
null
t3_1adzo4c
/r/LocalLLaMA/comments/1adzo4c/pure_cc_codes_to_train_small_gpt_modes_on_a/
false
false
self
1
null
Runpod alternative for a 24h service?
1
[removed]
2024-01-29T16:28:27
https://www.reddit.com/r/LocalLLaMA/comments/1adzcc6/runpod_alternative_for_a_24h_service/
Responsible-Sky8889
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1adzcc6
false
null
t3_1adzcc6
/r/LocalLLaMA/comments/1adzcc6/runpod_alternative_for_a_24h_service/
false
false
self
1
null
Reor: an AI personal knowledge management app powered by local models
70
Reor is an open-source AI personal knowledge management app that runs models locally. I wanted to build this because AI is the next step for organising unstructured notes but no one is talking about local models...We should be building tools that are offline & local by default, not OpenAI default! The three main things to know are: 1. Notes are connected automatically with vector similarity. In the sidebar, it shows the similar notes to the one you are currently editing. 2. You can Q&A your notes using the local LLM of your choice. 3. Embedding model, LLM, vector db and files are all run or stored locally. Under the hood, Reor uses Llama.cpp (node-llama-cpp integration), Transformers.js and Lancedb to power the local AI features. It's available for Mac, Windows & Linux on the project Github: [https://github.com/reorproject/reor](https://github.com/reorproject/reor).
2024-01-29T16:27:53
https://www.reddit.com/r/LocalLLaMA/comments/1adzbu7/reor_an_ai_personal_knowledge_management_app/
undamp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1adzbu7
false
null
t3_1adzbu7
/r/LocalLLaMA/comments/1adzbu7/reor_an_ai_personal_knowledge_management_app/
false
false
self
70
null