Dataset columns (type, observed range):
title: string, length 1–300
score: int64, 0–8.54k
selftext: string, length 0–41.5k
created: timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14
url: string, length 0–878
author: string, length 3–20
domain: string, length 0–82
edited: timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53
gilded: int64, 0–2
gildings: string, 7 classes
id: string, length 7
locked: bool, 2 classes
media: string, length 646–1.8k
name: string, length 10
permalink: string, length 33–82
spoiler: bool, 2 classes
stickied: bool, 2 classes
thumbnail: string, length 4–213
ups: int64, 0–8.54k
preview: string, length 301–5.01k
Would a merge between Neural Chat 7B v3.1 and OpenHermes-2.5 work?
7
I'm curious whether that would work and whether someone has already tried it. They are both finetunes of Mistral, so I would imagine it could. I have a feeling that this frankenmerge could produce a very good small model that might be better than any current <=14b.
2023-11-26T11:37:12
https://www.reddit.com/r/LocalLLaMA/comments/1848vbx/would_a_merge_between_neural_chat_7b_v31_and/
StrangeImagination5
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1848vbx
false
null
t3_1848vbx
/r/LocalLLaMA/comments/1848vbx/would_a_merge_between_neural_chat_7b_v31_and/
false
false
self
7
null
Relationship of RAM to context size?
8
I understand that more memory means you can run a model with more parameters or less compression, but how does context size factor in? I believe it's possible to increase the context size, and that this will increase the initial processing before the model starts outputting tokens, but does someone have numbers? Is memory for context independent of the model size, or does a bigger model mean that each bit of extra context 'costs' more memory? I'm considering an M2 Ultra for the large memory and low energy per token, although the speed is behind RTX cards. Is this the best option for tasks like writing novels?
2023-11-26T11:26:25
https://www.reddit.com/r/LocalLLaMA/comments/1848puo/relationship_of_ram_to_context_size/
EvokerTCG
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1848puo
false
null
t3_1848puo
/r/LocalLLaMA/comments/1848puo/relationship_of_ram_to_context_size/
false
false
self
8
null
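A rough way to quantify the question in the post above: the memory that extra context 'costs' is the KV cache, which grows with model depth and width as well as context length, so a bigger model does make each extra token of context more expensive. A sketch of the standard estimate (the Llama-2-7B-like shape below is an assumption for illustration):

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, context_len, bytes_per_elem=2):
    """Standard KV-cache estimate: keys AND values, cached per layer, per head, per token."""
    return 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elem

# Assumed Llama-2-7B-like shape: 32 layers, 32 KV heads, head dim 128, fp16 cache.
print(kv_cache_bytes(32, 32, 128, 4096) / 2**30)  # 2.0 (GiB at 4k context)
```

So on this assumed shape, doubling the context doubles the cache, and a deeper/wider model pays proportionally more per token of context, independent of the memory used by the weights themselves.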
Let's speak theory. Exploring the Potential of Collaborative Training?
1
Hi, you wonderful people! Here's a thought that came to my mind: Since training LLMs involves a degree of randomness, is there potentially a way to create an architecture for LLMs (or other AI) that would be somewhat **deterministic** in its training instead? What I mean is, could a theoretical architecture exist where everyone could train their own separate checkpoints on different datasets, which, after combining, would result in a checkpoint with **combined learning** from all these different smaller checkpoints? What this would allow us to do is let thousands of people create their own checkpoints, which when combined would result in something greater than the individual parts themselves. And since the training process is what takes the longest in developing LLMs (or any AI), this approach would allow *almost* everyone to contribute their share of processing power towards creating something **together.** If viable, this could have huge potential implications for **Open Source Software.** I'm looking forward to hearing what all of you smart people have to say about it!
2023-11-26T11:25:39
https://www.reddit.com/r/LocalLLaMA/comments/1848pgy/lets_speak_theory_exploring_the_potential_of/
paryska99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1848pgy
false
null
t3_1848pgy
/r/LocalLLaMA/comments/1848pgy/lets_speak_theory_exploring_the_potential_of/
false
false
self
1
null
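One existing technique close to the idea in the post above is checkpoint weight averaging (as in "model soups"), which combines separately trained checkpoints of the same architecture into one. A toy sketch, with plain dicts of floats standing in for parameter tensors:

```python
def average_checkpoints(checkpoints):
    """Uniform average of checkpoints that share the same parameter names."""
    n = len(checkpoints)
    return {k: sum(c[k] for c in checkpoints) / n for k in checkpoints[0]}

a = {"w": 1.0, "b": 0.0}  # checkpoint trained on dataset A
b = {"w": 3.0, "b": 2.0}  # checkpoint trained on dataset B
print(average_checkpoints([a, b]))  # {'w': 2.0, 'b': 1.0}
```

Whether averaging actually yields "combined learning" depends on the checkpoints sharing a common initialization; that caveat is exactly where the determinism question in the post bites.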
Need help validating an initial idea using RAG
1
Hello local llama friends. I am a data scientist with experience mainly in tabular data; I've never really worked with deep learning or LLMs. At my current job, I have a project where I need to create a PII-misuse detector for codebases, covering things such as: - Usage of HTTP requests - Direct database operations, especially INSERT or UPDATE statements, which might insert PII into a database - APIs that accept data from users, ensuring they have proper validation and are correctly collecting PII - Logging mechanisms, ensuring PII isn't being logged inadvertently. They want to use an LLM, and I was assigned to this project. After some research, my first idea was: store the whole codebase in a vector store such as Pinecone, and run RAG over the vectors with search over the code to return possible PII findings. Any thoughts or ideas would be great. Thanks.
2023-11-26T10:46:25
https://www.reddit.com/r/LocalLLaMA/comments/18484qd/need_help_validating_an_initial_idea_using_rag/
dick_veganas
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18484qd
false
null
t3_18484qd
/r/LocalLLaMA/comments/18484qd/need_help_validating_an_initial_idea_using_rag/
false
false
self
1
null
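Before reaching for a vector store, a cheap baseline for the PII-in-code detector described above is a pattern pre-filter over code chunks; the RAG/LLM pass can then be focused on the flagged chunks instead of the whole codebase. A hypothetical sketch (the patterns and chunk size are illustrative assumptions, not a complete detector):

```python
import re

# Hypothetical hints only; a real detector needs a much richer pattern set,
# and the LLM pass then judges whether flagged code actually mishandles PII.
PII_HINTS = re.compile(
    r"ssn|email|phone|date_of_birth|INSERT\s+INTO|UPDATE\s+\w+\s+SET",
    re.IGNORECASE,
)

def flag_chunks(source, chunk_lines=20):
    """Yield (1-based start line, chunk) for chunks that match a PII hint."""
    lines = source.splitlines()
    for i in range(0, len(lines), chunk_lines):
        chunk = "\n".join(lines[i:i + chunk_lines])
        if PII_HINTS.search(chunk):
            yield i + 1, chunk

hits = list(flag_chunks('db.execute("INSERT INTO users (email, ssn) VALUES (?, ?)")'))
print(hits[0][0])  # 1
```

The same chunk boundaries can be reused when embedding code into the vector store, so a flagged chunk and its retrieved neighbors line up.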
LLM Web-UI recommendations
143
So far, I have experimented with the following projects: [https://github.com/huggingface/chat-ui](https://github.com/huggingface/chat-ui) - Amazing clean UI with very good web search; my go-to currently. [https://github.com/oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) - Best overall; supports any model format and has many extensions. [https://github.com/ParisNeo/lollms-webui/](https://github.com/ParisNeo/lollms-webui/) - Has PDF, Stable Diffusion, and web search integration. [https://github.com/h2oai/h2ogpt](https://github.com/h2oai/h2ogpt) - Has PDF and web search; best for file ingestion (supports many file formats). [https://github.com/SillyTavern/SillyTavern](https://github.com/SillyTavern/SillyTavern) - Best for custom characters and roleplay. [https://github.com/NimbleBoxAI/ChainFury](https://github.com/NimbleBoxAI/ChainFury) - Has a great UI and web search (experimental). [https://github.com/nomic-ai/gpt4all](https://github.com/nomic-ai/gpt4all) - Basic UI that replicates ChatGPT. [https://github.com/imartinez/privateGPT](https://github.com/imartinez/privateGPT) - Basic UI that replicates ChatGPT, with PDF integration. LM Studio - Clean UI; focuses on the GGUF format. I really love these and wonder whether there are any other great projects. Some include full web search and PDF integration, some are more about characters, and oobabooga, for example, is the best at trying every single model format since it supports anything. What is your favorite project for interacting with your large language models? Share your findings and I'll add them!
2023-11-26T10:19:21
https://www.reddit.com/r/LocalLLaMA/comments/1847qt6/llm_webui_recommendations/
iChrist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1847qt6
false
null
t3_1847qt6
/r/LocalLLaMA/comments/1847qt6/llm_webui_recommendations/
false
false
self
143
{'enabled': False, 'images': [{'id': 'SPXRqkA3QsirejmMZJGPZhb0DJ6XfG4tK30E7hKbZ68', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=108&crop=smart&auto=webp&s=41694e01bde6cb391067701790469366bda45a6a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=216&crop=smart&auto=webp&s=022008af5fab90229cb9ee10a932325c935ec000', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=320&crop=smart&auto=webp&s=a04b205d4579b1a1122e7a0e604707156e36b6f5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=640&crop=smart&auto=webp&s=2b19d773e972d00e30b2c2ce402dd9c84ca04463', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=960&crop=smart&auto=webp&s=d215d4f9f23abd425e6160e7f892bb7dc53806c4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=1080&crop=smart&auto=webp&s=ecd2ff5959aa85d9483d6fceef2139101ee6f0e4', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?auto=webp&s=9113d7f64e38e4fc6cbcaa74f7806d8c3f0d53da', 'width': 1280}, 'variants': {}}]}
LLM Web-UI recommendations
2
So far, I have experimented with the following projects: [https://github.com/huggingface/chat-ui](https://github.com/huggingface/chat-ui) - Amazing clean UI with very good web search; my go-to currently. [https://github.com/oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) - Best overall; supports any model format and has many extensions. [https://github.com/ParisNeo/lollms-webui/](https://github.com/ParisNeo/lollms-webui/) - Has PDF, Stable Diffusion, and web search integration. [https://github.com/h2oai/h2ogpt](https://github.com/h2oai/h2ogpt) - Has PDF and web search; best for file ingestion (supports many file formats). [https://github.com/SillyTavern/SillyTavern](https://github.com/SillyTavern/SillyTavern) - Best for custom characters and roleplay. [https://github.com/NimbleBoxAI/ChainFury](https://github.com/NimbleBoxAI/ChainFury) - Has a great UI and web search (experimental). [https://github.com/nomic-ai/gpt4all](https://github.com/nomic-ai/gpt4all) - Basic UI that replicates ChatGPT. [https://github.com/imartinez/privateGPT](https://github.com/imartinez/privateGPT) - Basic UI that replicates ChatGPT, with PDF integration. LM Studio - Clean UI; focuses on the GGUF format. I really love these and wonder whether there are any other great projects. Some include full web search and PDF integration, some are more about characters, and oobabooga, for example, is the best at trying every single model format since it supports anything. What is your favorite project for interacting with your large language models? Share your findings and I'll add them!
2023-11-26T10:09:11
https://www.reddit.com/r/LocalLLaMA/comments/1847lif/llm_webui_recommendations/
iChrist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1847lif
false
null
t3_1847lif
/r/LocalLLaMA/comments/1847lif/llm_webui_recommendations/
false
false
self
2
{'enabled': False, 'images': [{'id': 'SPXRqkA3QsirejmMZJGPZhb0DJ6XfG4tK30E7hKbZ68', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=108&crop=smart&auto=webp&s=41694e01bde6cb391067701790469366bda45a6a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=216&crop=smart&auto=webp&s=022008af5fab90229cb9ee10a932325c935ec000', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=320&crop=smart&auto=webp&s=a04b205d4579b1a1122e7a0e604707156e36b6f5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=640&crop=smart&auto=webp&s=2b19d773e972d00e30b2c2ce402dd9c84ca04463', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=960&crop=smart&auto=webp&s=d215d4f9f23abd425e6160e7f892bb7dc53806c4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=1080&crop=smart&auto=webp&s=ecd2ff5959aa85d9483d6fceef2139101ee6f0e4', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?auto=webp&s=9113d7f64e38e4fc6cbcaa74f7806d8c3f0d53da', 'width': 1280}, 'variants': {}}]}
What WebUI you use to interact with your LLMs?
1
[removed]
2023-11-26T09:26:37
https://www.reddit.com/r/LocalLLaMA/comments/18470ik/what_webui_you_use_to_interact_with_your_llms/
iChrist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18470ik
false
null
t3_18470ik
/r/LocalLLaMA/comments/18470ik/what_webui_you_use_to_interact_with_your_llms/
false
false
self
1
{'enabled': False, 'images': [{'id': 'SPXRqkA3QsirejmMZJGPZhb0DJ6XfG4tK30E7hKbZ68', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=108&crop=smart&auto=webp&s=41694e01bde6cb391067701790469366bda45a6a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=216&crop=smart&auto=webp&s=022008af5fab90229cb9ee10a932325c935ec000', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=320&crop=smart&auto=webp&s=a04b205d4579b1a1122e7a0e604707156e36b6f5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=640&crop=smart&auto=webp&s=2b19d773e972d00e30b2c2ce402dd9c84ca04463', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=960&crop=smart&auto=webp&s=d215d4f9f23abd425e6160e7f892bb7dc53806c4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=1080&crop=smart&auto=webp&s=ecd2ff5959aa85d9483d6fceef2139101ee6f0e4', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?auto=webp&s=9113d7f64e38e4fc6cbcaa74f7806d8c3f0d53da', 'width': 1280}, 'variants': {}}]}
RAG beginner questions
25
Hi, I'm trying to understand all the stuff you're talking about. I have no ambitions of actually implementing anything. And I'm rather a beginner in the field. With a few questions about retrieval-augmented generation: I think I understand that RAG means that the shell around the LLM proper (say, the ChatGPT web app) uses your prompt to search for relevant documents in a vector database that is storing embeddings (vectors in a high-dimensional semantic ("latent") space), gets the most relevant embeddings (encoded chunks of documents) and feeds them into the LLM as (user-invisible) part of the prompt. 1. Why do we need embeddings here? We could use a regular text search, say in Solr, and stuff the prompt with human-readable documents. Is it just because embeddings compress the documents? Is it because the transformer's encoder makes embeddings out of it anyway, so you can skip that step? On the other hand, having the documents user-readable (and usable for regular search in other applications) would be a plus, wouldn't it? 2. If we get back embeddings from the database we cannot simply prepend the result to the prompt, can we? Because embeddings are something different than user input, it needs to "skip" the encoder part, right? Or can LLMs handle embeddings in user prompt, as they seem to be able to handle base64 sometimes? I'm quite confused here, because all those introductory articles seem to say that the retrieval result is prepended to the prompt, but that is only a conceptual view, isn't it? 3. If embeddings need to "skip" part of the LLM, doesn't that mean that a RAG-enabled system cannot be a mere wrapper around a closed LLM, but that the LLM needs to change its implementation/architecture, if only slightly? 4. What exactly is so difficult? I'm always reading about how RAG is highly difficult, with chunking and tuning and so on. 
Is it difficult because the relevance-search task itself is difficult, so that it is just as hard as regular text relevance search with result snippets and faceting and so on, or is there "more" difficulty? Especially regarding chunking: what is the conceptual difference between chunking in the vector database and breaking up documents in regular search? Thank you for taking some time out of your Sunday to answer "stupid" questions!
2023-11-26T09:21:18
https://www.reddit.com/r/LocalLLaMA/comments/1846y0u/rag_beginner_questions/
PersonSuitLevitating
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1846y0u
false
null
t3_1846y0u
/r/LocalLLaMA/comments/1846y0u/rag_beginner_questions/
false
false
self
25
null
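On questions 1–3 in the post above: the embeddings are used only to *rank* chunks; what actually gets prepended to the prompt is the original chunk text, so the LLM needs no architectural change and a RAG system really can be a wrapper around a closed model. A toy sketch, with a bag-of-words counter standing in for a real embedding model (an assumption purely for illustration):

```python
from collections import Counter
import math

def embed(text):
    # Toy "embedding": a bag-of-words vector. Real systems use a trained model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = ["the llama eats grass", "transformers use attention", "rag retrieves documents"]
index = [(embed(d), d) for d in docs]  # store the vector AND the original text

def retrieve(query, k=1):
    ranked = sorted(index, key=lambda p: cosine(embed(query), p[0]), reverse=True)
    return [text for _, text in ranked[:k]]  # text, not vectors, comes back

query = "how does rag work"
context = "\n".join(retrieve(query))
prompt = f"Context:\n{context}\n\nQuestion: {query}"  # plain text goes to the LLM
print(context)  # rag retrieves documents
```

This is also the answer to question 1: embeddings beat plain keyword search only because the vectors capture semantic similarity; a Solr index with good relevance tuning is a perfectly valid retriever for the same loop.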
Yann LeCun on why AI must be open source - video upload
422
2023-11-26T08:53:51
https://v.redd.it/v1ot5kq3pn2c1
onil_gova
v.redd.it
1970-01-01T00:00:00
0
{}
1846ksi
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/v1ot5kq3pn2c1/DASHPlaylist.mpd?a=1703667239%2CNWJiMjU4OWFkOWExYTQwYzJkNTI2MjRlYTAxMDc4NjQwMDNkODgwNmM2YjQ1YmEyYTg0Y2UzMzhhNGZhOGRlOQ%3D%3D&v=1&f=sd', 'duration': 285, 'fallback_url': 'https://v.redd.it/v1ot5kq3pn2c1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/v1ot5kq3pn2c1/HLSPlaylist.m3u8?a=1703667239%2CNzQyZTQ5ZDkzYWU2M2EzOWZmZmNjZDZkYjZjZjhhMmI3Y2U5YjM3ZjY3MTMzZTcxMTc3NjAwYzk3OGEwNjhjNA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/v1ot5kq3pn2c1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1846ksi
/r/LocalLLaMA/comments/1846ksi/yann_lecun_on_why_ai_must_be_open_source_video/
false
false
https://external-preview…0164247e4e381f32
422
{'enabled': False, 'images': [{'id': 'M2tmdHRpbjNwbjJjMVYohI1ZCNcyobfsoZRixj-szdtXks3w--h76aInUdX0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/M2tmdHRpbjNwbjJjMVYohI1ZCNcyobfsoZRixj-szdtXks3w--h76aInUdX0.png?width=108&crop=smart&format=pjpg&auto=webp&s=17c5212b04481924286873c8c6f8b7587f92d04d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/M2tmdHRpbjNwbjJjMVYohI1ZCNcyobfsoZRixj-szdtXks3w--h76aInUdX0.png?width=216&crop=smart&format=pjpg&auto=webp&s=d9591535bbbbd8ed1620616218dd20a69e9de516', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/M2tmdHRpbjNwbjJjMVYohI1ZCNcyobfsoZRixj-szdtXks3w--h76aInUdX0.png?width=320&crop=smart&format=pjpg&auto=webp&s=c325bb3cef480cced9e395b750f1dd4f6ecc925b', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/M2tmdHRpbjNwbjJjMVYohI1ZCNcyobfsoZRixj-szdtXks3w--h76aInUdX0.png?width=640&crop=smart&format=pjpg&auto=webp&s=7f855e1d07b9c446a26739c8146dd931d719a566', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/M2tmdHRpbjNwbjJjMVYohI1ZCNcyobfsoZRixj-szdtXks3w--h76aInUdX0.png?width=960&crop=smart&format=pjpg&auto=webp&s=8973342f7b11be37439ec1d6ee9c717732cbe10e', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/M2tmdHRpbjNwbjJjMVYohI1ZCNcyobfsoZRixj-szdtXks3w--h76aInUdX0.png?width=1080&crop=smart&format=pjpg&auto=webp&s=c51f4e6ec0d40b7c4aa0ec6992b840c0bc451e61', 'width': 1080}], 'source': {'height': 1019, 'url': 'https://external-preview.redd.it/M2tmdHRpbjNwbjJjMVYohI1ZCNcyobfsoZRixj-szdtXks3w--h76aInUdX0.png?format=pjpg&auto=webp&s=ea1b24f9935663c2c3220f609895fbffcf8f4fe1', 'width': 1812}, 'variants': {}}]}
Web Search Functionality For Local LLM's Implemented To Near Perfection Levels
1
[removed]
2023-11-26T08:27:51
https://www.reddit.com/r/LocalLLaMA/comments/18468hv/web_search_functionality_for_local_llms/
No_Marionberry312
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18468hv
false
null
t3_18468hv
/r/LocalLLaMA/comments/18468hv/web_search_functionality_for_local_llms/
false
false
https://b.thumbs.redditm…I68goq7JYdug.jpg
1
{'enabled': False, 'images': [{'id': 'COdJtOFfs2RSQURFqTOPkatxRaOHjoXEwyklCG9Y8RY', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/ml9GBuKe3fUigP-T7StFNtNQ0tYz1yRVQOQnq7CP0xU.jpg?width=108&crop=smart&auto=webp&s=1a09469a5fef8dc65470d553253652804e82e9ed', 'width': 108}, {'height': 91, 'url': 'https://external-preview.redd.it/ml9GBuKe3fUigP-T7StFNtNQ0tYz1yRVQOQnq7CP0xU.jpg?width=216&crop=smart&auto=webp&s=5b9b6ee95d4103d883387c3b622b42f1942fea82', 'width': 216}, {'height': 135, 'url': 'https://external-preview.redd.it/ml9GBuKe3fUigP-T7StFNtNQ0tYz1yRVQOQnq7CP0xU.jpg?width=320&crop=smart&auto=webp&s=2dbc0c8efbcd8c76e4edac67ff12aaf905a1eba4', 'width': 320}, {'height': 271, 'url': 'https://external-preview.redd.it/ml9GBuKe3fUigP-T7StFNtNQ0tYz1yRVQOQnq7CP0xU.jpg?width=640&crop=smart&auto=webp&s=f99ace7b2189ee2b613772f7e797a55c34cc713d', 'width': 640}, {'height': 407, 'url': 'https://external-preview.redd.it/ml9GBuKe3fUigP-T7StFNtNQ0tYz1yRVQOQnq7CP0xU.jpg?width=960&crop=smart&auto=webp&s=59b9c3f5188adcbe2b7efc308ffc44fe6e55b295', 'width': 960}, {'height': 458, 'url': 'https://external-preview.redd.it/ml9GBuKe3fUigP-T7StFNtNQ0tYz1yRVQOQnq7CP0xU.jpg?width=1080&crop=smart&auto=webp&s=2ac0bdbc0d1da773274122112bc28b2594b9dfdd', 'width': 1080}], 'source': {'height': 856, 'url': 'https://external-preview.redd.it/ml9GBuKe3fUigP-T7StFNtNQ0tYz1yRVQOQnq7CP0xU.jpg?auto=webp&s=dce9c3df9284656c0d75361bae740c77e4b1443c', 'width': 2016}, 'variants': {}}]}
Yann LeCun on why AI must be open source
112
2023-11-26T07:56:15
https://twitter.com/JosephJacks_/status/1728510229133119644/mediaViewer?currentTweet=1728510229133119644&currentTweet=1728510229133119644&currentTweetUser=JosephJacks_&currentTweetUser=JosephJacks_
onil_gova
twitter.com
1970-01-01T00:00:00
0
{}
1845scc
false
null
t3_1845scc
/r/LocalLLaMA/comments/1845scc/yann_lecun_on_why_ai_must_be_open_source/
false
false
default
112
null
I made this Kobold United compatible front end that doesn't require a web browser. Introducing: Clipboard Conqueror, an anywhere copilot. Accessible from any selectable text field, it's great for smashing the language barrier inside games without needing to tab out. All free.
1
2023-11-26T07:48:26
https://github.com/aseichter2007/ClipboardConqueror
aseichter2007
github.com
1970-01-01T00:00:00
0
{}
1845okt
false
null
t3_1845okt
/r/LocalLLaMA/comments/1845okt/i_made_this_kobold_united_compatible_front_end/
false
false
https://a.thumbs.redditm…ERT6GeLz-Qe0.jpg
1
{'enabled': False, 'images': [{'id': 'FZj7Wp8SilGmNgFAEus2cstF-4pL4PjnpY4jrDVpmFY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/z6xZOAdM2yLrP8fJm8UNSusrU1PfHOz-UhTEiZjCCGI.jpg?width=108&crop=smart&auto=webp&s=952961fc8cf6b112f511b6a2042021f83dc4cb22', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/z6xZOAdM2yLrP8fJm8UNSusrU1PfHOz-UhTEiZjCCGI.jpg?width=216&crop=smart&auto=webp&s=94c52d0ca8f705211d0a3c0ab43649d912ffdf83', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/z6xZOAdM2yLrP8fJm8UNSusrU1PfHOz-UhTEiZjCCGI.jpg?width=320&crop=smart&auto=webp&s=438eee6b25fec6f089a9b06dece1983094b9f9aa', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/z6xZOAdM2yLrP8fJm8UNSusrU1PfHOz-UhTEiZjCCGI.jpg?width=640&crop=smart&auto=webp&s=9117b62c46767b4736a7624d3527bbe0f03fef8e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/z6xZOAdM2yLrP8fJm8UNSusrU1PfHOz-UhTEiZjCCGI.jpg?width=960&crop=smart&auto=webp&s=7f1db43d5dcf56418ad1c82865349341538542f7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/z6xZOAdM2yLrP8fJm8UNSusrU1PfHOz-UhTEiZjCCGI.jpg?width=1080&crop=smart&auto=webp&s=c260233bb1780dae85f27c2a689056d6c32a13f4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/z6xZOAdM2yLrP8fJm8UNSusrU1PfHOz-UhTEiZjCCGI.jpg?auto=webp&s=f380688244428ddf1079c716b2d422e38e930b13', 'width': 1200}, 'variants': {}}]}
Trying to find a LLM that can run a D&D Campaign
2
I've searched around and was led to this sub through Google, but most people seem to be trying to replicate the AI Dungeon experience. I've never actually played D&D, but I had ChatGPT run a campaign for me before it got lobotomized. It was actually pretty damn good: it remembered and kept track of things a lot better, and it seemed to have more freedom and creativity with its choices, even getting a bit risque at times when I never asked for it. Anyway, I just resubbed to GPT since I saw you could make your own GPT model. I have a ton of PDFs with official D&D materials, so I fed it the 5th edition rulebook and a campaign I wanted to play. It didn't seem to make any difference; it couldn't even keep track of my character's stats, let alone modifiers. And creatively, it was definitely neutered. To cut to the chase, I've essentially summed up my use case here: I want to find a Llama-based model I can feed official D&D materials and have it run a campaign for me. I know there are multiple models depending on your expertise and system hardware. I am a newbie but no stranger to tech, so I'm willing to give any model a try. As for my system, I have a 3080 Ti, i9-12900KF, and 32GB DDR4-3600 RAM. Thanks for any suggestions or guidance in advance!
2023-11-26T07:19:37
https://www.reddit.com/r/LocalLLaMA/comments/1845abp/trying_to_find_a_llm_that_can_run_a_dd_campaign/
Any_Influence_8305
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1845abp
false
null
t3_1845abp
/r/LocalLLaMA/comments/1845abp/trying_to_find_a_llm_that_can_run_a_dd_campaign/
false
false
self
2
null
Good, cheap cloud providers for llm work
14
What are some good options these days for LLM work, primarily for fine-tuning and related experiments? This is for personal work and proof-of-concept type stuff, and it will be out of pocket, so I'd definitely prefer cheaper options. I'd mostly be using 7-13b models but later would want to test with larger models as well. Most of the providers have on-demand and spot options, with spot options being obviously cheaper. I understand spot instances can go down at any time, but assuming checkpoints are saved regularly and can be resumed later, that shouldn't be a big problem. Are there any gotchas here? The other criterion is a managed/secure environment vs. some kind of open/community environment. Again, the latter options are cheaper, and assuming security is not a major requirement, they seem like the better choice. Any thoughts or advice on this one? I'm mostly looking at RunPod, Vast, and Replicate based on info from other threads. Are there any other providers folks have had good experience with? How do AWS, GCP, or Azure compare to these options? From what I can tell they seem more expensive, but I haven't looked at them too closely. Any recommendations with some details on your own experience, use cases, and costs would be greatly appreciated.
2023-11-26T06:49:12
https://www.reddit.com/r/LocalLLaMA/comments/1844to2/good_cheap_cloud_providers_for_llm_work/
gamesntech
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1844to2
false
null
t3_1844to2
/r/LocalLLaMA/comments/1844to2/good_cheap_cloud_providers_for_llm_work/
false
false
self
14
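The checkpoint-and-resume assumption in the post above can be made concrete: name checkpoints by training step, and on startup resume from the highest step found, so a preempted spot instance only loses work since the last save. A minimal sketch (the `step_N.pt` naming is an assumption for illustration; real training would use `torch.save`/`torch.load`):

```python
import glob
import os
import re

def latest_step(paths):
    """Highest checkpoint step among filenames like step_500.pt, or -1 if none."""
    steps = [int(m.group(1)) for p in paths if (m := re.search(r"step_(\d+)\.pt$", p))]
    return max(steps, default=-1)

def run(total_steps, save_every, ckpt_dir="ckpts"):
    os.makedirs(ckpt_dir, exist_ok=True)
    # Resume from the step after the newest checkpoint (step 0 on a fresh start).
    start = latest_step(glob.glob(os.path.join(ckpt_dir, "*.pt"))) + 1
    for step in range(start, total_steps):
        # ... one optimizer step would go here ...
        if step % save_every == 0:
            open(os.path.join(ckpt_dir, f"step_{step}.pt"), "wb").close()  # stand-in for torch.save
```

The main gotcha is that the optimizer state and data-loader position must be checkpointed too, or the resumed run silently diverges from an uninterrupted one.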
null
What is the best model for PowerShell?
1
Is there a good model for helping to write PowerShell scripts? I've tested a couple of models that had 'coder' in their name, but they made basic syntax mistakes, and even simple scripts were not working. I am using the oobabooga text-generation-webui locally. A model that works within 24 GB of VRAM would be great. I'm new at this at the moment.
2023-11-26T06:11:30
https://www.reddit.com/r/LocalLLaMA/comments/18448ml/what_is_the_best_model_for_powershell/
House_MD_PL
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18448ml
false
null
t3_18448ml
/r/LocalLLaMA/comments/18448ml/what_is_the_best_model_for_powershell/
false
false
self
1
null
Alternatives to LLamaSharp?
7
Hi, I am a newbie C# dev trying to create a home project; until recently I was using [Llamasharp](https://github.com/SciSharp/LLamaSharp). There is little support for it, and since the last updates I've been unable to get it to work at all. I'm trying to build a little WPF chat application which can load either AWQ or GGUF LLM files. Are there any simple and easy-to-use libraries out there which I can use from C#? https://preview.redd.it/6tx5ij2imm2c1.jpg?width=877&format=pjpg&auto=webp&s=53e2a07f53e5d7e15ebbe727d6930bfd3bbea25b
2023-11-26T05:18:22
https://www.reddit.com/r/LocalLLaMA/comments/1843d6z/alternatives_to_llamasharp/
kimberly1818
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1843d6z
false
null
t3_1843d6z
/r/LocalLLaMA/comments/1843d6z/alternatives_to_llamasharp/
false
false
https://b.thumbs.redditm…iTt3NmWo5g7I.jpg
7
{'enabled': False, 'images': [{'id': 'cAx52B6wf4njzl-hqw8T6j8qo65MY3z6ZNyxlMrSzm4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2EkX2pladm9_QuSDa7OTACVgzbIbb4t2aDZT-d7k0NM.jpg?width=108&crop=smart&auto=webp&s=8acfd0bdc8fbf0e2695ad0e60183f1bcc3972950', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2EkX2pladm9_QuSDa7OTACVgzbIbb4t2aDZT-d7k0NM.jpg?width=216&crop=smart&auto=webp&s=7e139eb7e40f97198663d84ceddcbe8641855538', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2EkX2pladm9_QuSDa7OTACVgzbIbb4t2aDZT-d7k0NM.jpg?width=320&crop=smart&auto=webp&s=c7dbcf575f7002bb2006c6c2f3ee47500f1d3acf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2EkX2pladm9_QuSDa7OTACVgzbIbb4t2aDZT-d7k0NM.jpg?width=640&crop=smart&auto=webp&s=7313a23d8cdaaaa8a694cfc34289014cc73eedb4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2EkX2pladm9_QuSDa7OTACVgzbIbb4t2aDZT-d7k0NM.jpg?width=960&crop=smart&auto=webp&s=4e179aa94e0d2d073987cdc7816f72871517069d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2EkX2pladm9_QuSDa7OTACVgzbIbb4t2aDZT-d7k0NM.jpg?width=1080&crop=smart&auto=webp&s=c336a88ee278a343289d420e75637b98ac446960', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2EkX2pladm9_QuSDa7OTACVgzbIbb4t2aDZT-d7k0NM.jpg?auto=webp&s=aa605623ceed602f39c7462552d77d679f9238b0', 'width': 1200}, 'variants': {}}]}
struggling to include text prompts along with image-data (multimodal) for inferencing
2
I spun up a simple project (a home surveillance system) to play around with [ShareGPT4V-7B](https://github.com/InternLM/InternLM-XComposer/tree/main/projects/ShareGPT4V) and made quite a bit of progress over the last few days. However, I'm having a really hard time figuring out how to send a simple prompt along with the image-to-text request. Here is the relevant code:

```javascript
document.getElementById('send-chat').addEventListener('click', async () => {
  const message = document.getElementById('chat-input').value;
  appendUserMessage(message);
  document.getElementById('chat-input').value = ''; // Clear the input field

  const imageElement = document.getElementById('frame-display');
  // Extract the URL from the backgroundImage style, i.e. strip `url("` and `")`
  const imageUrl = imageElement.style.backgroundImage.slice(5, -2);

  try {
    const imageBlob = await fetch(imageUrl).then(res => res.blob());
    const reader = new FileReader();
    reader.onloadend = async () => {
      const base64data = reader.result.split(',')[1]; // Base64 content after the data-URL prefix
      const imageData = {
        data: base64data,
        id: 1 // Assuming a fixed ID for simplicity
      };
      const payload = {
        prompt: message,
        image_data: [imageData],
        n_predict: 256, // Assuming a fixed value for simplicity
        top_p: 0.5,
        temp: 0.2
      };
      const response = await fetch('http://localhost:8080/completion', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(payload)
      });
      const data = await response.json(); // Handle the response from the API
      console.log(data);
      appendAiResponse(data.content);
    };
    reader.readAsDataURL(imageBlob);
  } catch (error) {
    console.error('Error encoding image or sending request:', error);
  }
});
```

The only thing that works is sending an empty space (or sometimes a question mark), and then I get a general interpretation of the image, but what I really want is to be able to instruct the model so it knows what to look for. Is that something that's currently possible? Basically, system-prompting the vision model.
2023-11-26T04:37:02
https://www.reddit.com/r/LocalLLaMA/comments/1842nz3/struggling_to_include_text_prompts_along_with/
LyPreto
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1842nz3
false
null
t3_1842nz3
/r/LocalLLaMA/comments/1842nz3/struggling_to_include_text_prompts_along_with/
false
false
self
2
{'enabled': False, 'images': [{'id': 'isHp1QY7I0dKNBpG-ri7TJA3i0gsGeQU1VDWXlPrG14', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kOfr-fmuQjBFV9mBWg7LCZn-NBY__vRqEto89QftHH4.jpg?width=108&crop=smart&auto=webp&s=eda1ad2ccec6b6ea213249bad08c0b755ba668fe', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/kOfr-fmuQjBFV9mBWg7LCZn-NBY__vRqEto89QftHH4.jpg?width=216&crop=smart&auto=webp&s=18e66aaac55b3f4e1ef112ba307719d313ed0524', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/kOfr-fmuQjBFV9mBWg7LCZn-NBY__vRqEto89QftHH4.jpg?width=320&crop=smart&auto=webp&s=1214bc443c602432e830d95f7f3a9ce7e17bfdea', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/kOfr-fmuQjBFV9mBWg7LCZn-NBY__vRqEto89QftHH4.jpg?width=640&crop=smart&auto=webp&s=3fe7e795f63d90f5052c09575374dc181dee7850', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/kOfr-fmuQjBFV9mBWg7LCZn-NBY__vRqEto89QftHH4.jpg?width=960&crop=smart&auto=webp&s=338e2c63c84fb1dac3296b26fd305346e3962ebe', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/kOfr-fmuQjBFV9mBWg7LCZn-NBY__vRqEto89QftHH4.jpg?width=1080&crop=smart&auto=webp&s=a675c11603c105fbe17ad90cfda3dc0a36e415a1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/kOfr-fmuQjBFV9mBWg7LCZn-NBY__vRqEto89QftHH4.jpg?auto=webp&s=37c6c792074095c2f9291ca899648eb8216fd9dc', 'width': 1200}, 'variants': {}}]}
Proposal of LLM hosted in a co-funded host
2
Hello! I saw an idea about getting a big LLM (30/44 GB) running fast on a cloud server. What if that server were scalable in compute and the rent were shared among a group of users? Some sort of DAO to get it started? Personally, I would love to link advanced LLMs up to SD generation, etc. And OpenAI is too restrictive for my liking. What do you think?
2023-11-26T03:26:19
https://www.reddit.com/r/LocalLLaMA/comments/1841ens/proposal_of_llm_hosted_in_a_cofunded_host/
DanIngenius
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1841ens
false
null
t3_1841ens
/r/LocalLLaMA/comments/1841ens/proposal_of_llm_hosted_in_a_cofunded_host/
false
false
self
2
null
Ollama slow running on high GPU Server - is the GPU not being utilized?
2
Ollama running a 13B model is slow on a cloud server that has a 32 GB Nvidia Tesla V100S. Do I need to change my configuration to properly utilize the GPU memory?
2023-11-26T03:10:34
https://www.reddit.com/r/LocalLLaMA/comments/184143y/ollama_slow_running_on_high_gpu_server_is_the_gpu/
Smart-Egg-2568
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
184143y
false
null
t3_184143y
/r/LocalLLaMA/comments/184143y/ollama_slow_running_on_high_gpu_server_is_the_gpu/
false
false
self
2
null
Venus-120b: A merge of three different models in the style of Goliath-120b
99
Hi everyone, I'd like to share something that I've been working on for the past few days: https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.0

This model is the result of interleaving layers from three different models: Euryale-1.3-L2-70B, Nous-Hermes-Llama2-70b, and SynthIA-70B-v1.5, resulting in a model that is larger than any of the three used for the merge. I have branches on the repo for exl2 quants at 3.0 and 4.85 bpw, which will allow the model to run in 48 GB or 80 GB of VRAM, respectively.

I love using LLMs for RPs and ERPs, so my goal was to create something similar to Goliath, which is honestly the best roleplay model I've ever used. I've done some initial testing with it and so far the results seem encouraging. I'd love to get some feedback on this from the community!

Going forward, my plan is to do more experiments with merging models together, possibly going even larger than 120b parameters to see where the gains stop.
2023-11-26T02:59:52
https://www.reddit.com/r/LocalLLaMA/comments/1840wg5/venus120b_a_merge_of_three_different_models_in/
nsfw_throwitaway69
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1840wg5
false
null
t3_1840wg5
/r/LocalLLaMA/comments/1840wg5/venus120b_a_merge_of_three_different_models_in/
false
false
self
99
{'enabled': False, 'images': [{'id': '9i3WT9LfPfeM5rGrKzC3q3x0aS6euIJ6cnFmz7v6IRg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/817EctUWAk5VhVVHENHWG9dGMxRT9Co8PIjG4nxZlBE.jpg?width=108&crop=smart&auto=webp&s=44dcbdc19b7496af239e64077fa2debecb4669a5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/817EctUWAk5VhVVHENHWG9dGMxRT9Co8PIjG4nxZlBE.jpg?width=216&crop=smart&auto=webp&s=2839aa26130f2a56e6821ed9943daad0633f99ad', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/817EctUWAk5VhVVHENHWG9dGMxRT9Co8PIjG4nxZlBE.jpg?width=320&crop=smart&auto=webp&s=74c6729945aa8dabb04e6ba200596f638ef8ee61', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/817EctUWAk5VhVVHENHWG9dGMxRT9Co8PIjG4nxZlBE.jpg?width=640&crop=smart&auto=webp&s=2b3bb794ac43180dbb7bf2908af37334b188d289', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/817EctUWAk5VhVVHENHWG9dGMxRT9Co8PIjG4nxZlBE.jpg?width=960&crop=smart&auto=webp&s=8e41d6670cb69e086cd586f0a79b14c536b0d661', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/817EctUWAk5VhVVHENHWG9dGMxRT9Co8PIjG4nxZlBE.jpg?width=1080&crop=smart&auto=webp&s=a6a42a38d049a8783bb07f3fef773ef25da23ff0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/817EctUWAk5VhVVHENHWG9dGMxRT9Co8PIjG4nxZlBE.jpg?auto=webp&s=f8b13e7fea64a3a729c8f13acd6d5b4aacd7a6ab', 'width': 1200}, 'variants': {}}]}
Chat style for coding?
5
In text-generation-webui, the chat styles detect code badly; compared to ChatGPT, with its syntax-highlighting colors and all, we're far from that level of sophistication. Koboldcpp is even worse. Are there better chat styles we can import? Or extensions?
2023-11-26T02:30:46
https://www.reddit.com/r/LocalLLaMA/comments/1840cjo/chat_style_for_coding/
DrVonSinistro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1840cjo
false
null
t3_1840cjo
/r/LocalLLaMA/comments/1840cjo/chat_style_for_coding/
false
false
self
5
null
Best 13B 4-bit Model for Code and Mathematical Reasoning?
2
I'm assuming I would be able to run it with 16 GB RAM and 8 GB VRAM on an RTX 4060. I hope so, at least. I'm new to this and would like to know the best one that can reasonably replicate Code Interpreter (Advanced Data Analysis) from GPT-4 locally. Any suggestions would be much appreciated. Thank you!
2023-11-26T02:01:42
https://www.reddit.com/r/LocalLLaMA/comments/183zs4q/best_13b_4bit_model_for_code_and_mathematical/
lsn1000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183zs4q
false
null
t3_183zs4q
/r/LocalLLaMA/comments/183zs4q/best_13b_4bit_model_for_code_and_mathematical/
false
false
self
2
null
Share what WebUI you use to interact with your LLMs
1
[removed]
2023-11-26T01:01:57
https://www.reddit.com/r/LocalLLaMA/comments/183yla4/share_what_webui_you_use_to_interact_with_your/
iChrist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183yla4
false
null
t3_183yla4
/r/LocalLLaMA/comments/183yla4/share_what_webui_you_use_to_interact_with_your/
false
false
self
1
{'enabled': False, 'images': [{'id': 'SPXRqkA3QsirejmMZJGPZhb0DJ6XfG4tK30E7hKbZ68', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=108&crop=smart&auto=webp&s=41694e01bde6cb391067701790469366bda45a6a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=216&crop=smart&auto=webp&s=022008af5fab90229cb9ee10a932325c935ec000', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=320&crop=smart&auto=webp&s=a04b205d4579b1a1122e7a0e604707156e36b6f5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=640&crop=smart&auto=webp&s=2b19d773e972d00e30b2c2ce402dd9c84ca04463', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=960&crop=smart&auto=webp&s=d215d4f9f23abd425e6160e7f892bb7dc53806c4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=1080&crop=smart&auto=webp&s=ecd2ff5959aa85d9483d6fceef2139101ee6f0e4', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?auto=webp&s=9113d7f64e38e4fc6cbcaa74f7806d8c3f0d53da', 'width': 1280}, 'variants': {}}]}
A 15-year-old made an open-source GPT Store 🤯
1
2023-11-26T00:11:01
https://github.com/AyaanZaveri/cognite
FatSandwich1000
github.com
1970-01-01T00:00:00
0
{}
183xjhp
false
null
t3_183xjhp
/r/LocalLLaMA/comments/183xjhp/a_15yearold_made_an_opensource_gpt_store/
false
false
https://b.thumbs.redditm…MnAVb42Z1atA.jpg
1
{'enabled': False, 'images': [{'id': '6tzCpA8BQ5uOAOTwKo9UnZHfuByn2xuHdMCjX2pgGoU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DJaDMxxUrK9AUTCVGGDbnN81URg8OSShn7M7Jyi9uc0.jpg?width=108&crop=smart&auto=webp&s=daacfc40ce7f80f1249bd5d18cf20dc68f54d4dd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DJaDMxxUrK9AUTCVGGDbnN81URg8OSShn7M7Jyi9uc0.jpg?width=216&crop=smart&auto=webp&s=9527f747b197c228938b195f6556de61696cc7ea', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DJaDMxxUrK9AUTCVGGDbnN81URg8OSShn7M7Jyi9uc0.jpg?width=320&crop=smart&auto=webp&s=1eda5ab67aca85d76df48cabc08678574c4a60cd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DJaDMxxUrK9AUTCVGGDbnN81URg8OSShn7M7Jyi9uc0.jpg?width=640&crop=smart&auto=webp&s=57b9f2ee2546a33a7fcd513e656f389dcb53bc1b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DJaDMxxUrK9AUTCVGGDbnN81URg8OSShn7M7Jyi9uc0.jpg?width=960&crop=smart&auto=webp&s=e3e46dc87720660139fccf10a1deb0436cb5393c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DJaDMxxUrK9AUTCVGGDbnN81URg8OSShn7M7Jyi9uc0.jpg?width=1080&crop=smart&auto=webp&s=ff2bb8314543573a7bf57cf2207d3c982eaeaf2c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DJaDMxxUrK9AUTCVGGDbnN81URg8OSShn7M7Jyi9uc0.jpg?auto=webp&s=a0a22303edd3ba57b78a47a7107628bfee7f9db1', 'width': 1200}, 'variants': {}}]}
M1 Max with 64GB RAM vs. Intel i7 12800H, 64GB RAM, NVIDIA RTX 3080 Ti (laptop edition)?
1
[removed]
2023-11-25T23:42:33
https://www.reddit.com/r/LocalLLaMA/comments/183wyn0/m1_max_with_64gb_ram_vs_intel_i7_12800h_64gb_ram/
LLMCurious
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183wyn0
false
null
t3_183wyn0
/r/LocalLLaMA/comments/183wyn0/m1_max_with_64gb_ram_vs_intel_i7_12800h_64gb_ram/
false
false
self
1
null
Protip, you can customize any pc through a usb with WinPython, OpenAI, and Langchain (sorry if this is inappropriate subreddit for this, I just want to share ideas)
1
[removed]
2023-11-25T23:26:28
https://www.reddit.com/gallery/183wmvc
MicroNuke101
reddit.com
1970-01-01T00:00:00
0
{}
183wmvc
false
null
t3_183wmvc
/r/LocalLLaMA/comments/183wmvc/protip_you_can_customize_any_pc_through_a_usb/
false
false
https://b.thumbs.redditm…5fCZiiynK2YM.jpg
1
null
I made this novel front end that does not require a web browser and brings LLMs to anywhere you can type and select text. Works great for in game translation, no need to tab out. Tell me how you like it, tell me how you want to use it, and I will make suit any need.
1
2023-11-25T23:17:43
https://github.com/aseichter2007/ClipboardConqueror
aseichter2007
github.com
1970-01-01T00:00:00
0
{}
183wgds
false
null
t3_183wgds
/r/LocalLLaMA/comments/183wgds/i_made_this_novel_front_end_that_does_not_require/
false
false
https://a.thumbs.redditm…ERT6GeLz-Qe0.jpg
1
{'enabled': False, 'images': [{'id': 'FZj7Wp8SilGmNgFAEus2cstF-4pL4PjnpY4jrDVpmFY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/z6xZOAdM2yLrP8fJm8UNSusrU1PfHOz-UhTEiZjCCGI.jpg?width=108&crop=smart&auto=webp&s=952961fc8cf6b112f511b6a2042021f83dc4cb22', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/z6xZOAdM2yLrP8fJm8UNSusrU1PfHOz-UhTEiZjCCGI.jpg?width=216&crop=smart&auto=webp&s=94c52d0ca8f705211d0a3c0ab43649d912ffdf83', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/z6xZOAdM2yLrP8fJm8UNSusrU1PfHOz-UhTEiZjCCGI.jpg?width=320&crop=smart&auto=webp&s=438eee6b25fec6f089a9b06dece1983094b9f9aa', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/z6xZOAdM2yLrP8fJm8UNSusrU1PfHOz-UhTEiZjCCGI.jpg?width=640&crop=smart&auto=webp&s=9117b62c46767b4736a7624d3527bbe0f03fef8e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/z6xZOAdM2yLrP8fJm8UNSusrU1PfHOz-UhTEiZjCCGI.jpg?width=960&crop=smart&auto=webp&s=7f1db43d5dcf56418ad1c82865349341538542f7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/z6xZOAdM2yLrP8fJm8UNSusrU1PfHOz-UhTEiZjCCGI.jpg?width=1080&crop=smart&auto=webp&s=c260233bb1780dae85f27c2a689056d6c32a13f4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/z6xZOAdM2yLrP8fJm8UNSusrU1PfHOz-UhTEiZjCCGI.jpg?auto=webp&s=f380688244428ddf1079c716b2d422e38e930b13', 'width': 1200}, 'variants': {}}]}
Very basic question in regard to GGUF
3
I apologize if this is a silly question, but I'm very new to LLMs and AI. I want to run this GGUF model here: [https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter-GGUF/tree/main](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter-GGUF/tree/main)

My question is: what are the differences between all the versions of the model on the files-and-versions page? Should I download one specifically, or all of them? Which one should I load into Koboldcpp?
2023-11-25T21:38:37
https://www.reddit.com/r/LocalLLaMA/comments/183u8zz/very_basic_question_in_regard_to_gguf/
Mr_Hilikus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183u8zz
false
null
t3_183u8zz
/r/LocalLLaMA/comments/183u8zz/very_basic_question_in_regard_to_gguf/
false
false
self
3
{'enabled': False, 'images': [{'id': 'IE-5JwtOBcEfPvSGTaJemxzLd0RimCgVVjJXILTHmp8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PvyCVrblB3hXB-AldFbgXqg0Qt7w40R4eayzaBythEo.jpg?width=108&crop=smart&auto=webp&s=337e223e1bcc1bc1f31e55b342dcf00792369016', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PvyCVrblB3hXB-AldFbgXqg0Qt7w40R4eayzaBythEo.jpg?width=216&crop=smart&auto=webp&s=44b968192fbd3f67e7764afcbd11e7b7c2e97d20', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PvyCVrblB3hXB-AldFbgXqg0Qt7w40R4eayzaBythEo.jpg?width=320&crop=smart&auto=webp&s=6c9bc09c7284757d1ef94a7543026586f1ba2e0c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PvyCVrblB3hXB-AldFbgXqg0Qt7w40R4eayzaBythEo.jpg?width=640&crop=smart&auto=webp&s=a0e32182c253889e263fcd964711d3d7b6e58b0b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PvyCVrblB3hXB-AldFbgXqg0Qt7w40R4eayzaBythEo.jpg?width=960&crop=smart&auto=webp&s=019bc64b1469a6277677c453c4d10736aae174e5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PvyCVrblB3hXB-AldFbgXqg0Qt7w40R4eayzaBythEo.jpg?width=1080&crop=smart&auto=webp&s=edd7c017217871fb3660c0e55e3d136f2b4834b1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PvyCVrblB3hXB-AldFbgXqg0Qt7w40R4eayzaBythEo.jpg?auto=webp&s=cd81313a51e530452b31e908e9c4603756ac05b6', 'width': 1200}, 'variants': {}}]}
Multimodal error on launch
2
Hello everyone,

I'm encountering an issue while trying to launch the Multimodal/LLaVA model in the text-generation-webui framework. Despite following the standard setup procedures, I'm faced with a `KeyError: 'llava'` during the model loading phase.

Here's what I've tried so far:

- Changing the `model_type` in the `config.json` of `llava` to `"llama"`, as suggested in similar cases.
- Experimenting with different models and versions, all using the flags found here: https://github.com/oobabooga/text-generation-webui/tree/main/extensions/multimoda
- Using the following flags: `--listen --api --chat --trust-remote-code --model liuhaotian_llava-v1.5-7b --multimodal-pipeline llava-v1.5-7b --load-in-4bit`

Unfortunately, these steps haven't resolved the issue. The error occurs specifically when the `AutoConfig.from_pretrained` method is called in the `transformers` library, and the `llava` key isn't found in the configuration mapping (error below).

I'd appreciate any insights or suggestions from the community. Has anyone else encountered this issue or have any advice on how to troubleshoot it further?

```
2023-11-25 14:16:42 INFO:Loading the extension "multimodal"...
Traceback (most recent call last):
  File "D:\text-generation-webui-main\server.py", line 244, in <module>
    create_interface()
  File "D:\text-generation-webui-main\server.py", line 142, in create_interface
    extensions_module.create_extensions_block()  # Extensions block
  File "D:\text-generation-webui-main\modules\extensions.py", line 192, in create_extensions_block
    extension.ui()
  File "D:\text-generation-webui-main\extensions\multimodal\script.py", line 99, in ui
    multimodal_embedder = MultimodalEmbedder(params)
  File "D:\text-generation-webui-main\extensions\multimodal\multimodal_embedder.py", line 27, in __init__
    pipeline, source = load_pipeline(params)
  File "D:\text-generation-webui-main\extensions\multimodal\pipeline_loader.py", line 34, in load_pipeline
    model_name = shared.args.model.lower()
AttributeError: 'NoneType' object has no attribute 'lower'
Press any key to continue . . .
```
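For what it's worth, the last frame shows that `shared.args.model` is `None` by the time `pipeline_loader.py` calls `.lower()` on it, i.e. the webui never registered a model from the `--model` flag (a mismatch between the flag value and the actual model folder name could cause this). A tiny reproduction of the failure mode, with the attribute name taken from the traceback and everything else assumed:

```python
from types import SimpleNamespace

# mimic the parsed-args object implied by the traceback
args = SimpleNamespace(model=None)

# what pipeline_loader.py line 34 effectively does, and why it crashes:
try:
    model_name = args.model.lower()
except AttributeError as exc:
    crash_message = str(exc)  # 'NoneType' object has no attribute 'lower'

# a defensive variant that degrades to an empty name instead of crashing:
model_name = (args.model or "").lower()
```

So before chasing the `KeyError: 'llava'`, it may be worth confirming the `--model` value matches the model directory name exactly.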
2023-11-25T21:31:48
https://www.reddit.com/r/LocalLLaMA/comments/183u3l6/multimodal_error_on_launch/
psdwizzard
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183u3l6
false
null
t3_183u3l6
/r/LocalLLaMA/comments/183u3l6/multimodal_error_on_launch/
false
false
self
2
null
Any M2 ultra reviews?
5
So I'm considering getting a good LLM rig, and the M2 Ultra seems to be a good option for large memory, with much lower power usage and heat than 2 to 8 3090s or 4090s, albeit with lower speeds. I want to know if anyone is using one, and what it's like. I've read that it is less well supported by software, which could be an issue. Also, is it good for Stable Diffusion?

Another question is about memory and context length. Does a big memory let you increase the context length with smaller models, where the parameters don't fill the memory? I feel a big context would be useful for writing books and things.

Is there anything else to consider? Thanks.
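On the context question: the extra memory is the KV cache, which grows linearly with context length and also scales with the model's depth and width, so each token of context does "cost" more on a bigger model. A rough calculation, assuming Llama-2 70B's GQA dimensions (80 layers, 8 KV heads, head dim 128, fp16 cache):

```python
def kv_cache_bytes(n_ctx, n_layers=80, n_kv_heads=8, head_dim=128, bytes_per_elem=2):
    # 2x covers keys and values; per layer, per KV head, per context token
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * n_ctx

per_token_mib = kv_cache_bytes(1) / 2**20   # ~0.31 MiB per token of context
ctx_4k_gib = kv_cache_bytes(4096) / 2**30   # ~1.25 GiB at 4k context
```

So on a big unified-memory machine, context is cheap relative to the weights for GQA models; note that older MHA models without GQA use the full attention head count for the KV cache, which multiplies that figure several times over.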
2023-11-25T21:29:36
https://www.reddit.com/r/LocalLLaMA/comments/183u1sn/any_m2_ultra_reviews/
EvokerTCG
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183u1sn
false
null
t3_183u1sn
/r/LocalLLaMA/comments/183u1sn/any_m2_ultra_reviews/
false
false
self
5
null
🚀 Launching SauerkrautLM-7b-HerO: A New Era in German Language Modeling!
41
&#x200B; [SauerkrautLM-7b-Hero](https://preview.redd.it/n01vz6la2k2c1.png?width=1280&format=png&auto=webp&s=3cd3e3e4ba4daffd6b3d7a7d134a20687bab48fa) 🎉 Exciting news in the world of AI language models! Introducing **SauerkrautLM-7b-HerO**, a groundbreaking German language model that's set to redefine bilingual language processing. Find all the details on Huggingface: [https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO) Developed by merging **Teknium's OpenHermes-2.5-Mistral-7B** and **Open-Orca's Mistral-7B-OpenOrca**, this model isn't just any ordinary merged language model. It's been uniquely fine-tuned using the *Sauerkraut dataset*, a rich and varied source of German language data. What makes SauerkrautLM-7b-HerO stand out? Here's the scoop: * **Optimal Balance**: By integrating extensive German data with essential international sources, we've created a model that excels in understanding the nuances of the German language without compromising its global capabilities. * **Innovative Technology**: Utilizing the gradient SLERP method from MergeKit, we've seamlessly fused two of the most advanced 7B models based on the Mistral framework. This blend brings together the best features of both models, creating an unmatched synergy. * **Cultural and Linguistic Mastery**: The incorporation of the German Sauerkraut dataset, a unique mix of augmented and translated data, empowers the model to master the intricacies of the German language. This was achieved without the usual loss of core competencies that often comes with fine-tuning non-German models in German. * **Bilingual Proficiency**: Our approach ensures that SauerkrautLM-7b-HerO not only retains its original strengths but also gains a profound understanding of German. This sets a new benchmark in bilingual language model proficiency. 
This isn't just a step forward in language modeling; it's a leap into a future where AI understands and communicates in German as naturally as it does in English, without the need for resource-intensive German foundation models.

🔍 **What are your thoughts on this new development? Let's discuss in the comments!**

A brief review of relevant benchmarks performed with the new SauerkrautLM-7b-HerO model (more benchmarks on Huggingface):

[MT-Bench German](https://preview.redd.it/lw3yux6p2k2c1.png?width=2681&format=png&auto=webp&s=cb6f0f79cbbd694f528d3effd933e44cac7a50fd)

[MT-Bench English](https://preview.redd.it/lfx9jptb3k2c1.png?width=2731&format=png&auto=webp&s=4cf6678aae8336584560ba7e5e5de6f41baa2c8c)
2023-11-25T20:48:55
https://www.reddit.com/r/LocalLLaMA/comments/183t5k9/launching_sauerkrautlm7bhero_a_new_era_in_german/
AffectionateCan2342
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183t5k9
false
null
t3_183t5k9
/r/LocalLLaMA/comments/183t5k9/launching_sauerkrautlm7bhero_a_new_era_in_german/
false
false
https://b.thumbs.redditm…UgLRKU-6W8UA.jpg
41
{'enabled': False, 'images': [{'id': 'ENW23XEJb0OIJEbIaypmkse9uF30CMKhJjNB6dTrJBo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/rSTUn0L2YGXcD-Z7606mWZSVLogbctZbp2oLBjf2FgE.jpg?width=108&crop=smart&auto=webp&s=9f3eafa7cc832e45d1670ac762e8916f6204a4c6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/rSTUn0L2YGXcD-Z7606mWZSVLogbctZbp2oLBjf2FgE.jpg?width=216&crop=smart&auto=webp&s=d62273ffcde316b4b511994061147fd355bc1289', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/rSTUn0L2YGXcD-Z7606mWZSVLogbctZbp2oLBjf2FgE.jpg?width=320&crop=smart&auto=webp&s=4f7f6589f65bef7d69acb40a6ccdf4cdf44656d1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/rSTUn0L2YGXcD-Z7606mWZSVLogbctZbp2oLBjf2FgE.jpg?width=640&crop=smart&auto=webp&s=8cf9a9e894256ab35c5e087e99ce4b4883a6915a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/rSTUn0L2YGXcD-Z7606mWZSVLogbctZbp2oLBjf2FgE.jpg?width=960&crop=smart&auto=webp&s=43894d553a424d7c37f45509b283f3b71a86f5ff', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/rSTUn0L2YGXcD-Z7606mWZSVLogbctZbp2oLBjf2FgE.jpg?width=1080&crop=smart&auto=webp&s=1e051f0b58a2d26f6e595e76e5447a1d504e3bf8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/rSTUn0L2YGXcD-Z7606mWZSVLogbctZbp2oLBjf2FgE.jpg?auto=webp&s=0e6bc665d03208ab203f120448c2a95ed052debc', 'width': 1200}, 'variants': {}}]}
Any tips or reccomended 7b and 13b models casual and simple language? For non native speakers.
8
I am Brazilian and English is not my first language. Up to now, my solution for bringing the text complexity down has been presets like Ouroboros, with a low repetition penalty (1.05 up to 1.1) and high temperature. I find that some models with more human-written training data have simpler English; however, I wanted some tips, like models or settings in oobabooga, to bring down text complexity without ending up with an overly repetitive ERP or story. Thank you for your understanding.
2023-11-25T20:27:17
https://www.reddit.com/r/LocalLLaMA/comments/183sp07/any_tips_or_reccomended_7b_and_13b_models_casual/
3750gustavo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183sp07
false
null
t3_183sp07
/r/LocalLLaMA/comments/183sp07/any_tips_or_reccomended_7b_and_13b_models_casual/
false
false
self
8
null
Share your favorite frontend to use LLMs
2
I have experimented with the following projects:

[https://github.com/huggingface/chat-ui](https://github.com/huggingface/chat-ui)
[https://github.com/oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui)
[https://github.com/ParisNeo/lollms-webui/](https://github.com/ParisNeo/lollms-webui/)
[https://github.com/h2oai/h2ogpt](https://github.com/h2oai/h2ogpt)
[https://github.com/SillyTavern/SillyTavern](https://github.com/SillyTavern/SillyTavern)
[https://github.com/NimbleBoxAI/ChainFury](https://github.com/NimbleBoxAI/ChainFury)
[https://github.com/nomic-ai/gpt4all](https://github.com/nomic-ai/gpt4all)
[https://github.com/imartinez/privateGPT](https://github.com/imartinez/privateGPT)

I really love them and am wondering if there are any other great projects. Some of them include full web search and PDF integration, some are more about characters, and oobabooga, for example, is the best at handling every single model format there is.

Share your findings!
2023-11-25T20:24:49
https://www.reddit.com/r/LocalLLaMA/comments/183sn63/share_your_favorite_frontend_to_use_llms/
iChrist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183sn63
false
null
t3_183sn63
/r/LocalLLaMA/comments/183sn63/share_your_favorite_frontend_to_use_llms/
false
false
self
2
{'enabled': False, 'images': [{'id': 'SPXRqkA3QsirejmMZJGPZhb0DJ6XfG4tK30E7hKbZ68', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=108&crop=smart&auto=webp&s=41694e01bde6cb391067701790469366bda45a6a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=216&crop=smart&auto=webp&s=022008af5fab90229cb9ee10a932325c935ec000', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=320&crop=smart&auto=webp&s=a04b205d4579b1a1122e7a0e604707156e36b6f5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=640&crop=smart&auto=webp&s=2b19d773e972d00e30b2c2ce402dd9c84ca04463', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=960&crop=smart&auto=webp&s=d215d4f9f23abd425e6160e7f892bb7dc53806c4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?width=1080&crop=smart&auto=webp&s=ecd2ff5959aa85d9483d6fceef2139101ee6f0e4', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/gUu5ykUgNxlLs35R-tlMMuDCW5pxV-MQ87NhIKjhUY8.jpg?auto=webp&s=9113d7f64e38e4fc6cbcaa74f7806d8c3f0d53da', 'width': 1280}, 'variants': {}}]}
Script/repo for training any open source model on unstructured documents (text, pdf, markdown, etc...)
1
[removed]
2023-11-25T20:15:09
https://www.reddit.com/r/LocalLLaMA/comments/183sfpc/scriptrepo_for_training_any_open_source_model_on/
Emergency-Sir6270
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183sfpc
false
null
t3_183sfpc
/r/LocalLLaMA/comments/183sfpc/scriptrepo_for_training_any_open_source_model_on/
false
false
self
1
null
LLM for writing WebAssembly Code?
1
Are there any recommendations on which LLM to use for writing WebAssembly Code?
2023-11-25T20:01:50
https://www.reddit.com/r/LocalLLaMA/comments/183s50d/llm_for_writing_webassembly_code/
eachcitizen100
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183s50d
false
null
t3_183s50d
/r/LocalLLaMA/comments/183s50d/llm_for_writing_webassembly_code/
false
false
self
1
null
What are you using as an Alexa/Google LLm alternative?
6
#### LLM Home Assistant

I've been hosting an ollama server and was wondering if any of you have found a way to connect it to your smart house? An Alexa substitute, if you will.
2023-11-25T19:59:01
https://www.reddit.com/r/LocalLLaMA/comments/183s2iz/what_are_you_using_as_an_alexagoogle_llm/
BlankCrystal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183s2iz
false
null
t3_183s2iz
/r/LocalLLaMA/comments/183s2iz/what_are_you_using_as_an_alexagoogle_llm/
false
false
self
6
null
Llama-2 model with Milvus
1
I have some extracted data from specific documents which is stored as vector embeddings in a Milvus database. Now I want to know how to use a Llama-2 model to give responses based on the documents' data (stored in Milvus). How can the vector embeddings be combined with Llama to give responses related to the documents? Please help me with this 🙏
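The usual pattern here is plain RAG: embed the question with the same embedding model you used for the documents, run a vector search in Milvus for the top-k chunks, then paste those chunks into the Llama-2 prompt. A sketch of the prompt-assembly side (the commented pymilvus calls, collection name, and field names are assumptions; adapt them to your schema):

```python
def build_prompt(question, chunks):
    """Assemble a Llama-2 [INST] prompt from retrieved document chunks."""
    context = "\n\n".join(chunks)
    return (
        "[INST] <<SYS>>\nAnswer using only the context below.\n<</SYS>>\n\n"
        f"Context:\n{context}\n\nQuestion: {question} [/INST]"
    )

# Retrieval side (assumed pymilvus usage, assumed "docs" collection with a "text" field):
# from pymilvus import connections, Collection
# connections.connect(host="localhost", port="19530")
# coll = Collection("docs")
# hits = coll.search([embed(question)], "embedding", {"metric_type": "L2"},
#                    limit=4, output_fields=["text"])
# chunks = [h.entity.get("text") for h in hits[0]]

prompt = build_prompt("What is the refund policy?",
                      ["Refunds within 30 days.", "Contact support first."])
# feed `prompt` to the Llama-2 model (llama.cpp, transformers, etc.)
```

The key point is that the embeddings themselves never go into Llama; only the retrieved chunk text does.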
2023-11-25T19:51:49
https://www.reddit.com/r/LocalLLaMA/comments/183rx2w/llama2_model_with_milvus/
shalinbhanot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183rx2w
false
null
t3_183rx2w
/r/LocalLLaMA/comments/183rx2w/llama2_model_with_milvus/
false
false
self
1
null
Help me get llama running on a dual-socket, 8-channel system. (2x8)
4
I know this may be a lot to ask, but if there are any people interested in seeing how fast this can go, please help me out with your ideas here.

Hardware-wise, what I've got is an H11DSI-NT dual-socket SP3 motherboard, BIOS version 2.1, with two 32-core EPYC 7502s and all 16 memory slots populated by 256 GB (128 GB per socket) of 3200 MT/s DDR4 sticks certified for this board by Supermicro, totaling 8 memory channels on each of the 2 sockets.

For BIOS setup I have read the motherboard manual and AMD's implementation and tuning guides. As suggested for high-memory-bandwidth HPC (subtype CFD) workloads, I disabled SMT (hyperthreading), set the NUMA nodes per socket to 4 (NPS4), locked memory speed at 3200 to prevent matching the NB at 2933, and raised the 4-link xGMI speed from 10 to 16 Gbps (max 18).

For the OS I have installed Clear Linux, as it supposedly comes with OOTB tuning for HPC workloads. So far I've had some problems, however. The stateless config isn't so intuitive, which would be alright if not for the documentation actually being wrong in some places, possibly outdated. I've researched some configuration options to improve performance but none seemed to have a positive effect in my preliminary testing, so I have not applied any permanently. The OS is mostly stock for now.

How well does it actually work? As expected, as far as I can tell. The main reason I got this rig is rendering and it rips, so that's fine. As for memory, after limited tweaking here are the results of my STREAM benchmark:

```
STREAM version $Revision: 5.10 $
This system uses 8 bytes per array element.
Array size = 10000000 (elements), Offset = 0 (elements)
Memory per array = 76.3 MiB (= 0.1 GiB).
Total memory required = 228.9 MiB (= 0.2 GiB).
Number of Threads requested = 64
Number of Threads counted = 64
Your clock granularity/precision appears to be 1 microseconds.
Each test below will take on the order of 514 microseconds.
   (= 514 clock ticks)
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:          2867900.2     0.000067     0.000056     0.000122
Scale:         2761681.6     0.000065     0.000058     0.000100
Add:           3078388.3     0.000081     0.000078     0.000086
Triad:         3078388.3     0.000094     0.000078     0.000199
```

Pretty good, I think. Better than what the openSUSE guide for HPC I read promised at 250, but not as good as [AMD's technical tuning guide](https://www.amd.com/content/dam/amd/en/documents/epyc-technical-docs/tuning-guides/amd-epyc-7002-tg-hpc-56827.pdf), which promised 350. However, this appears highly variable for me; the result above is merely the best of 6, while the worst of 6 was a pathetic 1/3rd of it. No idea if that means anything.

Finally, for actual inference I got ggerganov/llama.cpp and lostruins/koboldcpp. I must say I am not at all familiar with running on CPU, as all of my usage until now was done on my GPU system; this one has no GPUs (beyond the motherboard VGA controller).

So far I've tried 2 models, TheBloke/nous-capybara-34b.Q4_K_M.gguf (Yi model) and Sao10K/Euryale-1.4-L2-70B.q5_K_S.gguf (Llama-2). The speed has been thoroughly disappointing, with the fastest result being:

```
llama_print_timings:        load time = 15986.07 ms
llama_print_timings:      sample time =    66.48 ms /   100 runs   (   0.66 ms per token, 1504.28 tokens per second)
llama_print_timings: prompt eval time =   848.76 ms /    20 tokens (  42.44 ms per token,   23.56 tokens per second)
llama_print_timings:        eval time = 38684.95 ms /    99 runs   ( 390.76 ms per token,    2.56 tokens per second)
llama_print_timings:       total time = 39667.94 ms
Log end
```

As a result of:

```
./main -m ../Nous-Capybara-34B-Q4_K_M.gguf -p "An extremely detailed description of the 10 best ethnic dishes will follow, with recipes: " -n 100 -t 32 -tb 64 --mlock --no-mmap --numa
```

The 70B model runs at about half that, best case.
I can download other models and software, apply any BIOS tweaks and OS settings, and compile with whatever flags, environment variables, and hacks are necessary.

Thank you if you read this far.
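One sanity check that might help frame this: single-stream token generation is memory-bandwidth bound, so a rough upper limit on tokens/s is effective bandwidth divided by model file size, since every token has to stream all the weights once (this ignores KV-cache reads, so it's optimistic). Working backwards from the numbers above, assuming the Q4_K_M 34B file is about 20.7 GB:

```python
def max_tokens_per_s(bandwidth_gb_s, model_gb):
    # roofline for batch-1 decode: each generated token reads every weight once
    return bandwidth_gb_s / model_gb

implied_bw = 2.56 * 20.7                   # effective GB/s at the observed 2.56 t/s
bound_200 = max_tokens_per_s(200.0, 20.7)  # what ~200 GB/s of real bandwidth would allow
```

Roughly 53 GB/s effective versus the hundreds STREAM reports suggests the bottleneck is threads pulling weights across sockets/NUMA nodes rather than raw bandwidth, so pinning (numactl, or llama.cpp's NUMA handling) is probably where the wins are.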
2023-11-25T19:26:20
https://www.reddit.com/r/LocalLLaMA/comments/183rd14/help_me_get_llama_running_on_a_dualsocket/
MindlessEditor2762
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183rd14
false
null
t3_183rd14
/r/LocalLLaMA/comments/183rd14/help_me_get_llama_running_on_a_dualsocket/
false
false
self
4
null
Maybe anecdotal but I have very high hopes for Yi 34b finetunes.
71
I tried one yesterday for the first time and I came away very impressed. It's not great (yet) for RP, but the basis seems very robust. For a laugh, I tried the usual reasoning tests and damn, it passed them all first try:

> **AI**
>
> How can I help you today?
>
> **You**
>
> There is a room with three murderers inside. Another person enters the room and shoots one of the three original occupants dead. Nobody else enters or leaves the room. How many murderers are there in the room?
>
> **AI**
>
> Four. Three original occupants who were already murderers and the fourth being the new person who entered the room and killed one of them.</s>
>
> **You**
>
> I buy ten apples at the market. I give two of them to my friend Bob and two to my mother. Then, I go back to the market and buy three more apples. On my way home, I eat one apple. At home, I give my brother three bananas. How many apples do I have left? Explain your reasoning step by step.
>
> **AI**
>
> Let's break it down step-by-step:
>
> You start with 10 apples.
>
> You give away 2 apples to Bob (leaving you with 8 apples) and then give away another 2 apples to your mother (leaving you with 6 apples).
>
> Next, you purchase 3 additional apples from the market, bringing your total up to 9 apples again.
>
> While on your way home, you consume 1 apple, leaving you with 8 apples remaining.
>
> Finally, when you reach home, you give your brother 3 bananas - but this doesn't affect the number of apples you have, since they are different fruits. So, you still have 8 apples left.</s>
>
> **You**
>
> Sally has five brothers. Each of her brothers has one sister. How many sisters does Sally have? Explain your reasoning step by step.
>
> **AI**
>
> Here's how we can reason through this problem:
>
> Sally has five brothers.
>
> It states that each of her brothers has one sister.
>
> Since all these siblings belong to the same family, it means that Sally is the only sister mentioned for those five brothers.
>
> Therefore, Sally has no other sisters besides herself.
>
> So, Sally has 0 sisters.</s>

Like, seriously? Huge context, really fast (on my 3090 anyway) and this level of performance. I can't wait to see what people can do with this.
2023-11-25T18:55:24
https://www.reddit.com/r/LocalLLaMA/comments/183qob8/maybe_anecdotal_but_i_have_very_high_hopes_for_yi/
Herr_Drosselmeyer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183qob8
false
null
t3_183qob8
/r/LocalLLaMA/comments/183qob8/maybe_anecdotal_but_i_have_very_high_hopes_for_yi/
false
false
self
71
null
Generate text summary using Llama.cpp with LORA fine tuning.
2
I'm running my Llama 2 model locally on CPU (32 GB RAM, 12 cores) as per this link that was shared earlier in another post: [https://rentry.org/cpu-lora](https://rentry.org/cpu-lora). I have followed all the instructions in that link and got it working. Although I'm able to get my model to talk like Shakespeare based on the training data shared, I would like to ask my model to generate a summary of, say, 100 or 200 words of the training content. Is this possible to achieve using llama.cpp and LoRA fine-tuning, or is there another elegant way to do this locally as well?
2023-11-25T18:28:34
https://www.reddit.com/r/LocalLLaMA/comments/183q2sv/generate_text_summary_using_llamacpp_with_lora/
leaderof13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183q2sv
false
null
t3_183q2sv
/r/LocalLLaMA/comments/183q2sv/generate_text_summary_using_llamacpp_with_lora/
false
false
self
2
null
I found out Laptop 3080 Ti has 16GB VRAM GDDR6 while desktop 3080 Ti has 12GB GDDR6X, what's better?
12
Title sums it up.
2023-11-25T18:14:16
https://www.reddit.com/r/LocalLLaMA/comments/183prnf/i_found_out_laptop_3080_ti_has_16gb_vram_gddr6/
hysterian
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183prnf
false
null
t3_183prnf
/r/LocalLLaMA/comments/183prnf/i_found_out_laptop_3080_ti_has_16gb_vram_gddr6/
false
false
self
12
null
System Configuration. (AMD / NVIDIA)
1
Hello all. I'm getting my rig ready to dive into some machine learning and need a bit of advice. After hanging around here for a bit, it looks like if you're rocking an AMD 7900 XT, Windows might be your best bet instead of Ubuntu. I'm really just looking for some first-hand information about the tools you are using and what you are doing with the system specifications you have.

Currently I'm running Linux with a 5800X3D, a 7900 XT, and 32GB.
2023-11-25T18:11:05
https://www.reddit.com/r/LocalLLaMA/comments/183pp4x/system_configuration_amd_nvidia/
RelevantFoundation14
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183pp4x
false
null
t3_183pp4x
/r/LocalLLaMA/comments/183pp4x/system_configuration_amd_nvidia/
false
false
self
1
null
Insights on Hyperparameters for Large Models?
5
I just watched Andrej Karpathy's latest video and it got me thinking - have we reached a point where there's a standard, go-to hyperparameter setup for each model parameter count? I'm curious whether there's a consensus in the community or if people are still experimenting with different configurations. Specifically, are most 7B models (Llama 2 vs Mistral) set up in a similar way now, or is there still a lot of variation in terms of layers, attention heads, and context lengths? Has anyone tried pre-training with longer context lengths (100k+), or is it still in the 2,000-8,000 range? I would appreciate any insights or references to recent discussions or papers on this topic. Thanks a lot!
2023-11-25T18:10:26
https://www.reddit.com/r/LocalLLaMA/comments/183pom7/insights_on_hyperparameters_for_large_models/
IndividualAd1648
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183pom7
false
null
t3_183pom7
/r/LocalLLaMA/comments/183pom7/insights_on_hyperparameters_for_large_models/
false
false
self
5
null
llama.cpp token probabilities being dumped using introspection
2
[removed]
2023-11-25T18:06:53
https://www.reddit.com/gallery/183pluk
introsp3ctor
reddit.com
1970-01-01T00:00:00
0
{}
183pluk
false
null
t3_183pluk
/r/LocalLLaMA/comments/183pluk/llamacpp_token_probabilities_being_dumped_using/
false
false
https://b.thumbs.redditm…TsJ3l8p2qHro.jpg
2
null
First attempts to reproduce Q* learning seem to be successful
1
2023-11-25T17:19:38
https://www.reddit.com/r/agi/comments/1832qyy/first_attempts_to_reproduce_q_learning_seem_to_be/
xSNYPSx
reddit.com
1970-01-01T00:00:00
0
{}
183oke8
false
null
t3_183oke8
/r/LocalLLaMA/comments/183oke8/first_attempts_to_reproduce_q_learning_seem_to_be/
false
false
https://b.thumbs.redditm…4YZ4O-TDCvwY.jpg
1
{'enabled': False, 'images': [{'id': 'CS48HCmNdGoQO-FhBjyfYF2Gu5G91xzPZJRfmusBZVw', 'resolutions': [{'height': 133, 'url': 'https://external-preview.redd.it/0PLjjRa_oSRqBCHVa90XdVhv4uyqmPrzu0jb3_Y4pek.jpg?width=108&crop=smart&auto=webp&s=ad1e7adff37741cbb0bd1da95975b37049fd7501', 'width': 108}, {'height': 266, 'url': 'https://external-preview.redd.it/0PLjjRa_oSRqBCHVa90XdVhv4uyqmPrzu0jb3_Y4pek.jpg?width=216&crop=smart&auto=webp&s=49db1157a890662a11f2530edf1f9cc7c04eede7', 'width': 216}, {'height': 394, 'url': 'https://external-preview.redd.it/0PLjjRa_oSRqBCHVa90XdVhv4uyqmPrzu0jb3_Y4pek.jpg?width=320&crop=smart&auto=webp&s=5b35fcf9d38bde80778aa96199aa9950bf1a7611', 'width': 320}, {'height': 789, 'url': 'https://external-preview.redd.it/0PLjjRa_oSRqBCHVa90XdVhv4uyqmPrzu0jb3_Y4pek.jpg?width=640&crop=smart&auto=webp&s=7f084440293d565ecd75016dd2a49a005da1a737', 'width': 640}], 'source': {'height': 971, 'url': 'https://external-preview.redd.it/0PLjjRa_oSRqBCHVa90XdVhv4uyqmPrzu0jb3_Y4pek.jpg?auto=webp&s=86af445690d16ef0087fa69d104b42d8c0689884', 'width': 787}, 'variants': {}}]}
How can I eliminate generic AI responses?
26
I use OpenHermes 2.5 7B, and I'm very happy with the performance overall. I tried to add the following context: "Avoid using phrases like 'What can I help you with today?', 'How can I assist you?', or 'as an AI' in your responses." I have created conversation examples and used the recommended template, but it still doesn't follow the instructions. Is there a way to get rid of those kinds of responses? Thank you in advance.
2023-11-25T16:40:55
https://www.reddit.com/r/LocalLLaMA/comments/183noze/how_can_i_eliminate_generic_ai_responses/
Asleep_Aerie_4591
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183noze
false
null
t3_183noze
/r/LocalLLaMA/comments/183noze/how_can_i_eliminate_generic_ai_responses/
false
false
self
26
null
China is retrofitting consumer RTX4090s with 2 slot blower for ML
92
This is the reason you can't find one in your local Best Buy: they are paying a premium for them. It would indeed be very helpful if I could get my hands on a few for my build.
2023-11-25T16:21:56
https://www.tomshardware.com/news/chinese-factories-add-blowers-to-old-rtx-4090-cards
--dany--
tomshardware.com
1970-01-01T00:00:00
0
{}
183na9z
false
null
t3_183na9z
/r/LocalLLaMA/comments/183na9z/china_is_retrofitting_consumer_rtx4090s_with_2/
false
false
https://b.thumbs.redditm…v3QO_zrs-zCk.jpg
92
{'enabled': False, 'images': [{'id': '2lminU1r8JpIBN6XJ--aO3Nz46jCAPdiQWsXFsbV1R4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/kiYa92A1raGxjLD6pA3dz2M9fa6XlETcfSDbR575RX0.jpg?width=108&crop=smart&auto=webp&s=2302bd540af94ea0cba06ae28f66227d02cd4607', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/kiYa92A1raGxjLD6pA3dz2M9fa6XlETcfSDbR575RX0.jpg?width=216&crop=smart&auto=webp&s=e71cffe9b85154a4dbe1229b1f9a730f6e9e1608', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/kiYa92A1raGxjLD6pA3dz2M9fa6XlETcfSDbR575RX0.jpg?width=320&crop=smart&auto=webp&s=468fdb75215c5b50c4c93bf77a97f8c9d0c8a1f4', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/kiYa92A1raGxjLD6pA3dz2M9fa6XlETcfSDbR575RX0.jpg?width=640&crop=smart&auto=webp&s=c1f782c11a24c6d244d6ee529670998663142f51', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/kiYa92A1raGxjLD6pA3dz2M9fa6XlETcfSDbR575RX0.jpg?width=960&crop=smart&auto=webp&s=83199cb0d78ec9de94b3afc6ef556924f4fe66cc', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/kiYa92A1raGxjLD6pA3dz2M9fa6XlETcfSDbR575RX0.jpg?width=1080&crop=smart&auto=webp&s=e0948657c78c9e443517c13839925ead8579f485', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/kiYa92A1raGxjLD6pA3dz2M9fa6XlETcfSDbR575RX0.jpg?auto=webp&s=40a19336505c6f2ca4936b1c515b2499a3a2e713', 'width': 1200}, 'variants': {}}]}
LLM for object detection/labelling
3
Hello, I'm looking for an alternative to Google Vision AI (LABEL_DETECTION, OBJECT_LOCALIZATION) and Amazon Rekognition (DetectLabels). Any ideas? Thanks!
2023-11-25T16:15:27
https://www.reddit.com/r/LocalLLaMA/comments/183n5a7/llm_for_object_detectionlabelling/
takezo07
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183n5a7
false
null
t3_183n5a7
/r/LocalLLaMA/comments/183n5a7/llm_for_object_detectionlabelling/
false
false
self
3
null
Was having issues in Text Gen UI. Are there any settings, model types, I need to be aware of to run a 70b model in Text Gen Ui with a RTX 4090, i9-13900K, and 128gb DDR5 ram?
1
[removed]
2023-11-25T15:54:48
https://www.reddit.com/r/LocalLLaMA/comments/183mp0n/was_having_issues_in_text_gen_ui_are_there_any/
Brad12d3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183mp0n
false
null
t3_183mp0n
/r/LocalLLaMA/comments/183mp0n/was_having_issues_in_text_gen_ui_are_there_any/
false
false
self
1
null
Anyone have a 1B or 3B model that is mostly coherent?
23
I've tried a few of these models but it was some months ago. Have y'all seen any that can hold a conversation yet?
2023-11-25T15:17:58
https://www.reddit.com/r/LocalLLaMA/comments/183lwaw/anyone_have_a_1b_or_3b_model_that_is_mostly/
multiverse_fan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183lwaw
false
null
t3_183lwaw
/r/LocalLLaMA/comments/183lwaw/anyone_have_a_1b_or_3b_model_that_is_mostly/
false
false
self
23
null
I have some questions
6
So for background, I've had some interest in LLMs and other AI for a year or so. I've used online LLMs like ChatGPT but haven't tried running my own due to 10-year-old hardware. I'm considering getting a new PC and want to know whether to splash out for one that can do high-end LLM stuff. I've read up a fair bit but have some questions that hopefully aren't too stupid.

1.) It looks like VRAM is the biggest hardware limit for model size. What are some good hardware options at different price points? Are there really expensive options that blow consumer stuff out of the water? Is now a good time to buy, or is there something worth waiting for?

2.) Open source models seem to be dependent on the trainers giving away their expensively acquired work. Are you anticipating model releases to replace Llama 2, and when?

3.) Is retraining or fine-tuning possible for ordinary users? Is this meaningfully different from having a 'mission' or instruction set added to the beginning of each prompt/context?

4.) I think I understand parameter size and compression, but what determines the token context size a model can handle? GPT-4's new massive context size is very handy.

5.) I'm interested in 'AutoGPT'-type systems (or response + validation, etc.). Can this work in series mode, where you only have 1 model running at a time? It seems like having specialised models could be useful. Would loading different models most suited to each particular 'subroutine' slow things down a lot? Are these systems difficult to set up, or is it just a matter of feeding the output of one query into the input of the next (while adding on previous relevant context)?

6.) Is the same type of hardware setup good for both LLMs and Stable Diffusion, or do they have separate setups for good bang/buck?

Many thanks to anyone who can help!
2023-11-25T15:13:58
https://www.reddit.com/r/LocalLLaMA/comments/183ltf0/i_have_some_questions/
EvokerTCG
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183ltf0
false
null
t3_183ltf0
/r/LocalLLaMA/comments/183ltf0/i_have_some_questions/
false
false
self
6
null
Help for the n00b? Optimal loader parameters...
3
Well, not a total n00b, as I've been playing with LLMs for almost a year and a half now, but with local LLMs only since summer. Although I have profound experience with local image generators, I thought I could use some of this knowledge when setting up LLMs, although it doesn't seem to be that easy ;) Any input that sheds some light on the problems I have will be greatly appreciated :)

**Hardware:** Ryzen 9 3900X, 48GB RAM, RTX 4090

**Oobabooga startup params:** --load-in-8bit --auto-devices --gpu-memory 23 --cpu-memory 42 --auto-launch --listen

I still have a problem getting around some issues, likely caused by improper loader settings, and I'm looking for some tips on how to set them optimally. I use the oobabooga UI as it's the most comfortable for me and lets me test models before deploying them elsewhere (i.e. to company UIs - I'm working on a chatbot connected to a vector DB for local document storage, and I thought about ooba as a backend for quickly loading models, setting parameters, and exposing them via API). However, its documentation is vague, and I have a feeling that the names for the parameters and so on are not standardized either.

**GENERAL:**

I find that pretty much all models generate much shorter answers than GPT-4 or even GPT-3.5. While a long answer is not always desirable (and some models tend to be very descriptive where 3-5 words would be enough), for e.g. code generation it is. Is it possible to make the answers longer?

Accuracy is more important than speed, although I prefer to stick with the GPU instead of the CPU, so I guess I'm limited to 13B-23B models right now (?) - but I find them pretty good anyway. I'm not sure if jumping to 70B models would really improve the quality...

**LOADERS:**

First thing - which model loader is optimal? I wanted to try a model with the same parameter count (e.g. 13B) but better precision (e.g. 8bit 32g vs 4bit 32g), but I find it impossible to load (error message about improper shape).

In **ExLlamav2_HF**, for example, I can set:

*max_seq_len* - by default it should be 8k for llamav2/vicuna/alpaca based models, right? But I have trouble verifying it in the model descriptions at TheBloke's Hugging Face (what am I missing?)

*compress_pos_emb/alpha_value* - my understanding is that this allows the model to run with a longer context than the model's default, right? But at the same time it lowers the answer quality, I think - e.g. setting max_seq_len to 16384 and compress_pos_emb causes values with repeating digits to come out wrong. For example, if there should be 31337 in the answer, the model will return 3137; asked about the year 1990, it will return 190, and so on... Is this expected behaviour?

*cfg-cache, no_flash_attn, cache_8bit, no_use_fast* - these seem to do nothing, or further (cache_8bit) degrade the answer quality - is this expected behaviour?

As for **AutoGPTQ**, it's even more of a mystery to me, and usually it won't load the model at all... here are the parameters I use:

*gpu-memory in MiB for device:* 22000
*cpu-memory in MiB:* 36864
*wbits* -> ???
*groupsize* -> ???
*triton:* off
*no_inject_fused_attention:* off
*no_inject_fused_mlp:* off
*no_use_cuda_fp16:* off
*desc_act:* on
*cpu:* off
*auto-devices:* on
*disk:* off
*trust-remote-code:* off
*disable_exllama:* on
*no_use_fast:* off
2023-11-25T15:01:19
https://www.reddit.com/r/LocalLLaMA/comments/183ljnd/help_for_the_n00b_optimal_loader_parameters/
AirwolfPL
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183ljnd
false
null
t3_183ljnd
/r/LocalLLaMA/comments/183ljnd/help_for_the_n00b_optimal_loader_parameters/
false
false
self
3
null
RWKV v5 7b, Fully Open-Source, 60% trained, approaching Mistral 7b in abilities or surpassing it.
148
So RWKV v5 7B is 60% trained now. I saw that the multilingual parts are better than Mistral now, and the English capabilities are close to Mistral, except for HellaSwag and ARC, where it's a little behind. All the benchmarks are on the RWKV Discord, and you can google the pros/cons of RWKV, though most of them are about v4. Thoughts?
2023-11-25T14:48:43
https://www.reddit.com/r/LocalLLaMA/comments/183lal3/rwkv_v5_7b_fully_opensource_60_trained/
vatsadev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183lal3
false
null
t3_183lal3
/r/LocalLLaMA/comments/183lal3/rwkv_v5_7b_fully_opensource_60_trained/
false
false
self
148
null
I'm about to open source my Flutter / Dart plugin to run local inference on all major platforms. See how it runs on my personal Apple devices: macOS (Intel & M1), iOS, iPadOS. Next up: Android, Linux & Windows. AMA.
27
2023-11-25T14:42:11
https://v.redd.it/423rfe3d9i2c1
BrutalCoding
v.redd.it
1970-01-01T00:00:00
0
{}
183l5z5
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/423rfe3d9i2c1/DASHPlaylist.mpd?a=1703515347%2CZWY3ZGE3NDE1YmI1MGFjNmZjNDg5MTI0ZDdhODFhNTFkNDJiMTg2ZTM5ODg5Y2MxNGI5YmUxYzM3YzkwYTJmZg%3D%3D&v=1&f=sd', 'duration': 52, 'fallback_url': 'https://v.redd.it/423rfe3d9i2c1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/423rfe3d9i2c1/HLSPlaylist.m3u8?a=1703515347%2CYjgzMTFhZTA4M2ViNWVmN2MzZDFkNDVmMTA2OWQ0MmYwMjAzOTFhMzM1ZDIyZTY0MTg1ZGJiOTUxZWRkMzMzOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/423rfe3d9i2c1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1410}}
t3_183l5z5
/r/LocalLLaMA/comments/183l5z5/im_about_to_open_source_my_flutter_dart_plugin_to/
false
false
https://external-preview…28601da07b859227
27
{'enabled': False, 'images': [{'id': 'NXlxczVkZ2NhaTJjMbOVW0lJL_QpTQq6zutSNQtPhIhjDoXoFAAnAggInWTX', 'resolutions': [{'height': 82, 'url': 'https://external-preview.redd.it/NXlxczVkZ2NhaTJjMbOVW0lJL_QpTQq6zutSNQtPhIhjDoXoFAAnAggInWTX.png?width=108&crop=smart&format=pjpg&auto=webp&s=22ac3d8882747d56c97cad2faa2fc35f8f0633d7', 'width': 108}, {'height': 165, 'url': 'https://external-preview.redd.it/NXlxczVkZ2NhaTJjMbOVW0lJL_QpTQq6zutSNQtPhIhjDoXoFAAnAggInWTX.png?width=216&crop=smart&format=pjpg&auto=webp&s=83d3f79da121394aaf56f09f609d4847f52176e3', 'width': 216}, {'height': 244, 'url': 'https://external-preview.redd.it/NXlxczVkZ2NhaTJjMbOVW0lJL_QpTQq6zutSNQtPhIhjDoXoFAAnAggInWTX.png?width=320&crop=smart&format=pjpg&auto=webp&s=50f953223eadfa76e509b7c1fd45632000a97429', 'width': 320}, {'height': 489, 'url': 'https://external-preview.redd.it/NXlxczVkZ2NhaTJjMbOVW0lJL_QpTQq6zutSNQtPhIhjDoXoFAAnAggInWTX.png?width=640&crop=smart&format=pjpg&auto=webp&s=1a586c0aea6d4f36861b334ce591d87c713d9061', 'width': 640}, {'height': 734, 'url': 'https://external-preview.redd.it/NXlxczVkZ2NhaTJjMbOVW0lJL_QpTQq6zutSNQtPhIhjDoXoFAAnAggInWTX.png?width=960&crop=smart&format=pjpg&auto=webp&s=76bce8ccc293e8cd3cae4c4772f84c19936c491f', 'width': 960}, {'height': 826, 'url': 'https://external-preview.redd.it/NXlxczVkZ2NhaTJjMbOVW0lJL_QpTQq6zutSNQtPhIhjDoXoFAAnAggInWTX.png?width=1080&crop=smart&format=pjpg&auto=webp&s=fa86c2a50e7636be8fac9bc9358d4779d4663100', 'width': 1080}], 'source': {'height': 1332, 'url': 'https://external-preview.redd.it/NXlxczVkZ2NhaTJjMbOVW0lJL_QpTQq6zutSNQtPhIhjDoXoFAAnAggInWTX.png?format=pjpg&auto=webp&s=4d1f8303f91a3a3081ae63573fd1729d8fa4958d', 'width': 1740}, 'variants': {}}]}
Fine tune LLAMA to adapt to certain text style
4
I am wondering if there is a way to fine-tune a LLaMA model on unlabelled data. I have a txt dump filled with conversations, and I would like the model to generate responses with a text style similar to the one I have. The same could be done earlier when finetuning [ULMFiT](https://github.com/sheikhazhanmohammed/AAAI-Constraint-Shared-Tasks-2021/blob/main/ULMFITTrainer.ipynb) and [BERT](https://github.com/huggingface/notebooks/blob/main/examples/language_modeling_from_scratch.ipynb). I tried to find a suitable resource but couldn't find much. Any help would be really appreciated.
2023-11-25T14:24:33
https://www.reddit.com/r/LocalLLaMA/comments/183ktpo/fine_tune_llama_to_adapt_to_certain_text_style/
mayiSLYTHERINyourbed
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183ktpo
false
null
t3_183ktpo
/r/LocalLLaMA/comments/183ktpo/fine_tune_llama_to_adapt_to_certain_text_style/
false
false
self
4
{'enabled': False, 'images': [{'id': 'bv-V-FxreUR5tbJeErxPHWQx4EWGt62yMkix156qZaU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-mCB0SxsLSDsT0g5U6D-ZLSkPQxLEXNvge-e3f1k3_Y.jpg?width=108&crop=smart&auto=webp&s=e42131e32485d2cc99a5ec8979bf72c9f823f9b2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-mCB0SxsLSDsT0g5U6D-ZLSkPQxLEXNvge-e3f1k3_Y.jpg?width=216&crop=smart&auto=webp&s=35f4ece54568c96b3ee92dd1099052ee4d7b0323', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-mCB0SxsLSDsT0g5U6D-ZLSkPQxLEXNvge-e3f1k3_Y.jpg?width=320&crop=smart&auto=webp&s=3996a45d93332e71882aa913cf703ea132d18b53', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-mCB0SxsLSDsT0g5U6D-ZLSkPQxLEXNvge-e3f1k3_Y.jpg?width=640&crop=smart&auto=webp&s=d7f28452327f2f6d743b74647e34eabdd9b5228e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-mCB0SxsLSDsT0g5U6D-ZLSkPQxLEXNvge-e3f1k3_Y.jpg?width=960&crop=smart&auto=webp&s=b687314875c99b870e8677466951bdd5437c1246', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-mCB0SxsLSDsT0g5U6D-ZLSkPQxLEXNvge-e3f1k3_Y.jpg?width=1080&crop=smart&auto=webp&s=eef4168e94b5b7d44843098b727600942263d47c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-mCB0SxsLSDsT0g5U6D-ZLSkPQxLEXNvge-e3f1k3_Y.jpg?auto=webp&s=c8ac2570c3cb486ff70b77914e665ac9c26dde23', 'width': 1200}, 'variants': {}}]}
code-llama2 + LangChain + chromadb - good for Q&A structured text?
8
I've been meaning to set up a local 33B code-llama2 model to help me with coding questions, mostly. Sort of a personal KB (phind-33B; if you have better suggestions please let me know). I thought of using LangChain + code-llama2 + chromadb. I did read around that this could be a good setup. A question though: I mostly have long markdown documents in the form of Q&A that I can RAG later:

````
question: how do you write a recursive Fibonacci function in C++?
answer: here's how:
```cpp
blabla
```
````

-> Would it make sense to use chromadb for that? I guess I could do better than just automatically splitting paragraphs, and even indicate where one Q&A starts and where it ends; I'm not sure if I could use chromadb or another DB with some sort of 'schema'. Thanks.
2023-11-25T14:20:18
https://www.reddit.com/r/LocalLLaMA/comments/183kqqt/codellama2_langchain_chromadb_good_for_qa/
Specialist_Ice_5715
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183kqqt
false
null
t3_183kqqt
/r/LocalLLaMA/comments/183kqqt/codellama2_langchain_chromadb_good_for_qa/
false
false
self
8
null
How to Build an AI Assistant in 4 Steps
1
2023-11-25T14:14:32
https://picockpit.com/raspberry-pi/raspigpt-ai-assistant/
cloud9engineer
picockpit.com
1970-01-01T00:00:00
0
{}
183kmw4
false
null
t3_183kmw4
/r/LocalLLaMA/comments/183kmw4/how_to_build_an_ai_assistant_in_4_steps/
false
false
https://a.thumbs.redditm…lmvbhztwMEL4.jpg
1
{'enabled': False, 'images': [{'id': '8KRxzsmD6XLbxXer_ldFFmnIomZuG9Z9ok4yylC47-I', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/LqN88q9lIfmpae37XIIy4PowpIdfzXrMTtFGstOLW5E.jpg?width=108&crop=smart&auto=webp&s=abcb92dc39c57667bb1b8395dff0a62bdc811069', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/LqN88q9lIfmpae37XIIy4PowpIdfzXrMTtFGstOLW5E.jpg?width=216&crop=smart&auto=webp&s=b10433f107ad3d6ff8fd9a7fbc37722c767c7c30', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/LqN88q9lIfmpae37XIIy4PowpIdfzXrMTtFGstOLW5E.jpg?width=320&crop=smart&auto=webp&s=82cb625ba0b865901e75bf724362b2d145ff4c8f', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/LqN88q9lIfmpae37XIIy4PowpIdfzXrMTtFGstOLW5E.jpg?width=640&crop=smart&auto=webp&s=cba76235957217fdafcfd11a4e0253cafcbb856d', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/LqN88q9lIfmpae37XIIy4PowpIdfzXrMTtFGstOLW5E.jpg?width=960&crop=smart&auto=webp&s=e377b8aaaa4e4b75d56a3af971dd943c4722df31', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/LqN88q9lIfmpae37XIIy4PowpIdfzXrMTtFGstOLW5E.jpg?width=1080&crop=smart&auto=webp&s=d18bd5bfb472d4fb7a7aea2075703861c882e795', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/LqN88q9lIfmpae37XIIy4PowpIdfzXrMTtFGstOLW5E.jpg?auto=webp&s=2963fafd8d0f6372a5e8ade8023b919f4d2050b3', 'width': 1920}, 'variants': {}}]}
Could multiple 7b models outperform 70b models?
54
If I have multiple 7B models where each model is trained on one specific topic (e.g. roleplay, math, coding, history, politics...) and I have an interface which decides, depending on the context, which model to use, could this outperform bigger models while being faster?
2023-11-25T13:42:40
https://www.reddit.com/r/LocalLLaMA/comments/183k0tx/could_multiple_7b_models_outperform_70b_models/
freehuntx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183k0tx
false
null
t3_183k0tx
/r/LocalLLaMA/comments/183k0tx/could_multiple_7b_models_outperform_70b_models/
false
false
self
54
null
Advice on building a local server-level computer
1
I’ve recently completed my first PC build and loved the process of researching everything. Now I’m wondering: what is the most powerful system someone can build locally? I know about the typical consumer-facing high-end products like the RTX 4090 and Ryzen 9 7950X. But I’ve also seen server hardware with much greater capabilities, like AMD EPYC CPUs (seen in a recent post here: https://www.reddit.com/r/LocalLLaMA/s/FXMyFnEx3m), the A100, and HPUs. My goal is to eventually build a machine powerful enough to run multiple open source LLMs unquantized. Say you had around $20,000 to build the most powerful home setup. How would you do it?

Note: I realize that $20,000 is overkill for my goal. Just assume this is a thought experiment and money is no object.
2023-11-25T13:00:15
https://www.reddit.com/r/LocalLLaMA/comments/183j8tk/advice_on_building_a_local_serverlevel_computer/
SirStagMcprotein
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183j8tk
false
null
t3_183j8tk
/r/LocalLLaMA/comments/183j8tk/advice_on_building_a_local_serverlevel_computer/
false
false
self
1
null
ctransformers VS llama-cpp-python which one should I use?
8
I'm on my way to deploying a GGUF model on a Hugging Face Space (free hardware, CPU and RAM). Currently I'm using a GGUF model because I need to run it on CPU. Later, I have plans to run AWQ models on GPU. I'm currently thinking about ctransformers or llama-cpp-python. Please suggest which one I should use as a beginner with a plan of integrating LLMs with websites in the future.

Comparison aspects:

- Speed
- Computational power
- Efficiency
- Readability and ease of use
- Popularity and availability of educational resources

Extra questions:

- If I learn ctransformers, is it going to help me when I use the Hugging Face transformers library to load GPU-based models in the future?
- Which one has more resources for solving problems?
- Which one requires less code to run?

Consider all these aspects; you must choose one between the two.

Do I need to learn llama.cpp or C++ to deploy models using the llama-cpp-python library?

I used to run AWQ quantized models on my local machine and there is a huge difference in quality: the same model with the same bit precision performs much, much worse in GGUF format compared to AWQ. Is there something wrong? Suggest some fixes.
2023-11-25T12:07:54
https://www.reddit.com/r/LocalLLaMA/comments/183ie9t/ctransformers_vs_llamacpppython_which_one_should/
ZiadHAsan23
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183ie9t
false
null
t3_183ie9t
/r/LocalLLaMA/comments/183ie9t/ctransformers_vs_llamacpppython_which_one_should/
false
false
self
8
null
Rule of thumb for peft/lora
4
Is there a good way (or rule of thumb) to decide, when looking at a problem, whether PEFT/LoRA fine-tuning might be successful or whether it only makes sense to do a complete fine-tuning of all weights? Given the big difference in cost, knowing whether PEFT/LoRA might work for a problem feels pretty essential.
2023-11-25T11:49:23
https://www.reddit.com/r/LocalLLaMA/comments/183i3vg/rule_of_thumb_for_peftlora/
trollbrot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183i3vg
false
null
t3_183i3vg
/r/LocalLLaMA/comments/183i3vg/rule_of_thumb_for_peftlora/
false
false
self
4
null
What is the best 7b coding LLM till now ?
19
I want to try AutoGen locally, so I wanted to use a good coding LLM to work with it. Any suggestions? My system only has 16 GB RAM and can only run 7B models at an okay speed.
2023-11-25T11:42:02
https://www.reddit.com/r/LocalLLaMA/comments/183hzz1/what_is_the_best_7b_coding_llm_till_now/
BlissfulEternalLotus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183hzz1
false
null
t3_183hzz1
/r/LocalLLaMA/comments/183hzz1/what_is_the_best_7b_coding_llm_till_now/
false
false
self
19
null
Realistically, how far i can push my current PC?
15
Title essentially. I'm currently running an RTX 3060 with 12GB of VRAM, 32GB RAM, and an i5-9600K. I've been running 7B and 13B models effortlessly via KoboldCpp (I tend to offload all 35 layers to GPU for 7Bs, and 40 for 13Bs) + SillyTavern for roleplaying purposes, but slowdown becomes noticeable at higher context with 13Bs (not too bad, so I deal with it). Is this setup capable of running bigger models like 20B or potentially even 34B?
2023-11-25T10:50:27
https://www.reddit.com/r/LocalLLaMA/comments/183h8zb/realistically_how_far_i_can_push_my_current_pc/
constanzabestest
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183h8zb
false
null
t3_183h8zb
/r/LocalLLaMA/comments/183h8zb/realistically_how_far_i_can_push_my_current_pc/
false
false
self
15
null
Wanna upgrade PC for LLMs
1
Hey guys, thinking of upgrading my PC. I'm a dev and I want to run my own LLMs; it's mostly to run my own Copilot locally instead of relying on outside services. This is what I have now: Ryzen 7 3700X, 32GB RAM, 5500 XT. Debating whether I should get a 3950X or a 5800X3D, as I could then game a bit better as well. As for the GPU, I might just go for the 4090, but if this is overkill please let me know. What do you guys think?
2023-11-25T10:35:51
https://www.reddit.com/r/LocalLLaMA/comments/183h1tu/wanna_upgrade_pc_for_llms/
Ok_Brain_2376
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183h1tu
false
null
t3_183h1tu
/r/LocalLLaMA/comments/183h1tu/wanna_upgrade_pc_for_llms/
false
false
self
1
null
Currently, what are the practical applications of local language models?
2
Interested in real-world uses beyond just experimentation
2023-11-25T10:10:10
https://www.reddit.com/r/LocalLLaMA/comments/183gonf/currently_what_are_the_practical_applications_of/
cauIkasian
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183gonf
false
null
t3_183gonf
/r/LocalLLaMA/comments/183gonf/currently_what_are_the_practical_applications_of/
false
false
self
2
null
Worth it running 2x 4090 at pcie 8x?
1
I’m fortunate to have access to three 4090s, but each in a separate PC. One of them supports dual GPUs at PCIe x8, but there is not enough physical space, and I’m concerned about PCIe x8 as a bottleneck. Does anyone have experience with PCIe x8, and is there a noticeable performance drop? I also train my own models, so even a 10% slowdown can mean an extra 2 or 3 days of training. Or would it be worth getting an old server motherboard/CPU and putting all 3 GPUs into a single PC? I can’t afford the new server-grade motherboards. Thanks in advance!
2023-11-25T08:19:35
https://www.reddit.com/r/LocalLLaMA/comments/183f3ub/worth_it_running_2x_4090_at_pcie_8x/
Chance-Tell-9847
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183f3ub
false
null
t3_183f3ub
/r/LocalLLaMA/comments/183f3ub/worth_it_running_2x_4090_at_pcie_8x/
false
false
self
1
null
Running large models below requirements?
5
Question about the possibility of running large models on a 3070 Ti with 32GB RAM: what's the best way to run them, if possible, without quality loss? Speed isn't an issue; I just want to be able to run such models ambiently.
2023-11-25T08:03:25
https://www.reddit.com/r/LocalLLaMA/comments/183evev/running_large_models_below_requirements/
Maelstrom100
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183evev
false
null
t3_183evev
/r/LocalLLaMA/comments/183evev/running_large_models_below_requirements/
false
false
self
5
null
Coding Help - Hugging face pipelines with streaming?
2
I'm using pipeline from the Transformers library for interacting with my local models: pipeline("text-generation", model=model, tokenizer=tokenizer). How do I add streaming to this? I don't want to wait for the entire output to be generated. Can I make it stream token by token?
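Recent versions of transformers ship TextStreamer (prints tokens as they arrive) and TextIteratorStreamer (lets you iterate over tokens), passed to generation via the streamer= argument. TextIteratorStreamer works roughly like this pattern: generation runs in a background thread and pushes tokens into a queue that the caller iterates. A self-contained sketch of that pattern (no model needed; TokenStreamer and fake_generate are illustrative stand-ins, not transformers APIs):

```python
import queue
import threading

class TokenStreamer:
    """Minimal stand-in for transformers.TextIteratorStreamer."""
    _END = object()  # sentinel marking end of generation

    def __init__(self):
        self._q = queue.Queue()

    def put(self, token):
        # Called from the generation thread for each new token.
        self._q.put(token)

    def end(self):
        # Called once generation finishes.
        self._q.put(self._END)

    def __iter__(self):
        # Consumer side: block until each token arrives, stop at the sentinel.
        while True:
            item = self._q.get()
            if item is self._END:
                return
            yield item

def fake_generate(prompt, streamer):
    # Stand-in for model.generate(..., streamer=streamer).
    for tok in prompt.split():
        streamer.put(tok)
    streamer.end()

streamer = TokenStreamer()
threading.Thread(target=fake_generate, args=("hello streaming world", streamer)).start()
received = [tok for tok in streamer]  # tokens arrive one by one
print(received)  # ['hello', 'streaming', 'world']
```

With an actual model the shape is the same: construct transformers.TextIteratorStreamer(tokenizer), launch model.generate(**inputs, streamer=streamer) in a Thread, and iterate the streamer. For the simplest case (printing to stdout), passing TextStreamer(tokenizer) as streamer= needs no threading at all.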
2023-11-25T08:00:26
https://www.reddit.com/r/LocalLLaMA/comments/183etjp/coding_help_hugging_face_pipelines_with_streaming/
zzKillswitchzz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183etjp
false
null
t3_183etjp
/r/LocalLLaMA/comments/183etjp/coding_help_hugging_face_pipelines_with_streaming/
false
false
self
2
null
How to give knowledge to a model
7
Hi! I’m quite new to LLMs and want to use one to make training workouts. My idea would be to feed it scientific studies and a bunch of example workouts. Is this what “training a model” is for? Any resources where I can start to learn how to train one? Can I use an already fine-tuned model like Mistral, or do I need to train a base model like Llama 2? Can I train a quantized model, or do I need to use a vanilla one and quantize it after training? I have 2x3090, a 5950X, and 64GB of RAM, if that matters. If I can load a model for inference, can I also train it? Are the resources needed the same? Thanks!
2023-11-25T07:41:02
https://www.reddit.com/r/LocalLLaMA/comments/183ejxq/how_to_give_knowledge_to_a_model/
bullerwins
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183ejxq
false
null
t3_183ejxq
/r/LocalLLaMA/comments/183ejxq/how_to_give_knowledge_to_a_model/
false
false
self
7
null
MBP M3 max for Local LLama?
5
I want to buy a laptop or build a machine powerful enough to run these LLMs locally. I am OK with either investing in a desktop or a MBP, though the MBP is appealing for its ability to run these models on the laptop itself. Any pointers would be helpful. I tried researching, but there is so much info out there that I got spooked. Any initial pointers would really help. Thank you!
2023-11-25T06:48:46
https://www.reddit.com/r/LocalLLaMA/comments/183dsc9/mbp_m3_max_for_local_llama/
itsnotmeyou
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183dsc9
false
null
t3_183dsc9
/r/LocalLLaMA/comments/183dsc9/mbp_m3_max_for_local_llama/
false
false
self
5
null
Has anyone here personally run a GPU in a USB PCIe riser for inference?
1
[removed]
2023-11-25T06:20:57
https://www.reddit.com/r/LocalLLaMA/comments/183dd54/has_anyone_here_personally_ran_a_gpu_in_a_usb/
wh33t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183dd54
false
null
t3_183dd54
/r/LocalLLaMA/comments/183dd54/has_anyone_here_personally_ran_a_gpu_in_a_usb/
false
false
self
1
null
Identity-PO: DeepMind takes the ELO out of DPO
35
https://x.com/kylemarieb/status/1728281581306233036 New DeepMind paper just dropped. Background: Direct Preference Optimization (DPO) is the simpler, more robust, higher-performing successor of RLHF used in Zephyr, Intel’s new model, and others. Identity-PO simplifies DPO, removing its reliance on Elo scores (and the mathematical assumptions that come with them). The authors claim this solves overfitting, which is huge if true.
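For reference, the IPO objective as I read the paper (hedged, from memory of Azar et al., 2023): instead of a logistic (Bradley-Terry/Elo-style) loss on preference probabilities, it regresses the log-likelihood-ratio gap between the preferred completion y_w and the rejected y_l toward a constant set by the regularization strength tau:

```latex
h_\pi(y_w, y_l, x) = \log\frac{\pi(y_w \mid x)\,\pi_{\mathrm{ref}}(y_l \mid x)}{\pi(y_l \mid x)\,\pi_{\mathrm{ref}}(y_w \mid x)},
\qquad
\mathcal{L}_{\mathrm{IPO}} = \mathbb{E}\!\left[\left(h_\pi(y_w, y_l, x) - \frac{1}{2\tau}\right)^2\right]
```

The squared loss is bounded for any fixed gap, which is the claimed mechanism against the overfitting that DPO's unbounded sigmoid loss can exhibit on deterministic preferences.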
2023-11-25T06:00:35
https://www.reddit.com/r/LocalLLaMA/comments/183d0t6/identitypo_deepmind_takes_the_elo_out_of_dpo/
georgejrjrjr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183d0t6
false
null
t3_183d0t6
/r/LocalLLaMA/comments/183d0t6/identitypo_deepmind_takes_the_elo_out_of_dpo/
false
false
self
35
{'enabled': False, 'images': [{'id': '6Ex0HUiPbL92TKkjOILM5RuoH7TiQ_aevzIUewpLaYI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/n7i5_hM3kSD9hU83eiBh5DRTkLaDRF9iJT65jIOiqOc.jpg?width=108&crop=smart&auto=webp&s=7d28dbf079d34a9d1a4cd0dd02da26a9d20871b3', 'width': 108}], 'source': {'height': 200, 'url': 'https://external-preview.redd.it/n7i5_hM3kSD9hU83eiBh5DRTkLaDRF9iJT65jIOiqOc.jpg?auto=webp&s=7888fe3c791c8e779de48a62e9a5dcaabab4623b', 'width': 200}, 'variants': {}}]}
LocalAI speed stats
1
Can anyone share their speeds when running an AI model locally with an 8-10 core CPU and 32 GB RAM? Running Mistral, what is the ideal speed? Can it support simultaneous requests, etc.?
2023-11-25T05:44:36
https://www.reddit.com/r/LocalLLaMA/comments/183crsa/localai_speed_stats/
gpt872323
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183crsa
false
null
t3_183crsa
/r/LocalLLaMA/comments/183crsa/localai_speed_stats/
false
false
self
1
null
Real talk - 70Bs are WAY better than the smaller models.
236
Am I the only one who thinks there is some heavy wishful thinking going on in regard to the latest 34Bs (and even 13B/7B) being on the same level as the 70Bs like lzlv? Using lzlv70b q4_k_m *feels* like using an uncensored version of GPT 3.5. It understands what you want, deals with mediocre prompts or formatting inconsistencies, and delivers great results - usually first try. By comparison, the smaller models feel very finicky and underwhelming. You have to handhold them and give them very careful prompts, and then it might take a few tries to get anything worthwhile. It also feels like you're often choosing between illogical creativity and uninspired logic, while the 70B will give you output that is both interesting and logical. Believe me, I'd love it if the 34Bs and friends were as amazing as people make them seem, but I find them... not that great. Even though I can only get about 2 tokens / second with the 70Bs at q4_k_m, I find myself consistently disappointed whenever I try something smaller despite the massive speed increases. Am I crazy or is there still a huge moat between the 70Bs and the smaller models that people aren't making a big enough deal about when discussing these things?
2023-11-25T04:43:06
https://www.reddit.com/r/LocalLLaMA/comments/183bqei/real_talk_70bs_are_way_better_than_the_smaller/
Sea_Particular_4014
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183bqei
false
null
t3_183bqei
/r/LocalLLaMA/comments/183bqei/real_talk_70bs_are_way_better_than_the_smaller/
false
false
self
236
null
Model for legal writing?
7
Wondering if anyone has managed to adapt a model to generate writing in a legal style (say, based on a corpus of appellate briefs and opinions). To be clear, I’m not looking for something that learns the law and can go wild writing about it. What I am hoping to find: something I can give draft text to, that then revises in a style consistent with examples of good legal writing. One of these days, I might try to figure this out. Any input appreciated!
2023-11-25T03:46:34
https://www.reddit.com/r/LocalLLaMA/comments/183aqxn/model_for_legal_writing/
Hinged31
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
183aqxn
false
null
t3_183aqxn
/r/LocalLLaMA/comments/183aqxn/model_for_legal_writing/
false
false
self
7
null
Train Smarter, Not Harder? - MiniSymposium 7b
75
[https://huggingface.co/kalomaze/MiniSymposium-Demo](https://huggingface.co/kalomaze/MiniSymposium-Demo) MiniSymposium is an experimental model that I created based on Mistral 7B. I created it to test these goals: 1. Demonstrate the untapped potential of using a small, focused dataset of handwritten examples instead of training on a large amount of synthetic GPT outputs, by lowering the learning rate and doing many passes over the small dataset 2. Create a dataset that allows the model to explore different possible answers from multiple perspectives before reaching a final conclusion ('Socratic prompting'?) 3. Develop a model that performs well across various prompt formats, rather than overfitting to a specific kind of format, which should naturally extend to other general-purpose use cases The current trend in QLoRA/LoRA-based fine-tuning (and fine-tuning in general for local LLMs) is to use large synthetic datasets. These are typically GPT-generated datasets trained with higher learning rates. However, I believe there is a lot of potential in using small, hand-written datasets with low learning rates, even if it's for general-purpose instruction following, as long as you train for many epochs at a learning rate low enough to avoid overfitting. This approach, I hypothesize, helps the model learn the deeper patterns of instruction following, including the small details. This should help to avoid shallow data biases (like "As an AI made by OpenAI" and other GPT-isms) that are irrelevant to deeper instruction-following patterns, especially in long-context and multiturn scenarios. My initial configuration for this QLoRA model used a constant learning rate of 1e-6 (0.000001), which resulted in obvious, massive overfitting after about 100 epochs.
The model started reproducing the original dataset almost verbatim, and exhibited poor generalization across different prompt formats, including obvious hallucinations and also Chinese-language outputs for some reason. However, turning the learning rate down to 1/10th of that (1e-7, which is 0.0000001) significantly improved the model with the same exact small dataset. I trained for about ~10 hours on my RTX 3060 to 600 epochs; I think it's still a little undertrained, but I encourage people to try the demo model out in the meantime. https://preview.redd.it/54imvd09ee2c1.png?width=1561&format=png&auto=webp&s=a0e603f5f5a960189b0d225ab5581f2a0339d12d https://preview.redd.it/al6gmpuaee2c1.png?width=1132&format=png&auto=webp&s=5704aa41e87a5555664405d2f0178287bd7bde35 https://preview.redd.it/7fs90ictee2c1.png?width=1140&format=png&auto=webp&s=7f94c1d76493673d83e0d066efe9f43e21205fe7 It's designed to be very adaptable to different prompt formats and playing roles, and I've gotten some fun and sometimes surprisingly good outputs so far. A few samples of the training data are formatted like this to help avoid blatant overconfidence in its outputs, serving as a sort of self-correction mechanism: https://preview.redd.it/vlmyw1smfe2c1.png?width=2448&format=png&auto=webp&s=4c2cfea77188b9529c2c0c1c1fe29af9d152f0bf Let me know how this model goes. There are lots of merges of models that are all sort of doing the same thing, so I figured a more experimental approach would be appreciated. I think there is still more room to optimize the LR/epoch balance, and I'll probably add some more examples of specific tasks like summarization to the dataset so that it's not *too* small (but still lightweight enough to generalize well).
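For anyone wanting to reproduce the setup, a sketch of the kind of QLoRA configuration described, assuming peft + transformers + bitsandbytes. Only the learning rate and epoch count come from the post; every other value is an illustrative guess, not the author's exact config:

```python
from transformers import TrainingArguments, BitsAndBytesConfig
from peft import LoraConfig

# 4-bit base model quantization (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")

# Adapter config; rank/alpha/target modules are guesses.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="minisymposium-qlora",
    learning_rate=1e-7,            # 1/10th of the overfitting 1e-6 run
    num_train_epochs=600,          # many passes over the tiny dataset
    lr_scheduler_type="constant",  # the post describes a constant LR
    per_device_train_batch_size=1,
)
```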
2023-11-25T01:52:21
https://www.reddit.com/r/LocalLLaMA/comments/1838hf0/train_smarter_not_harder_minisymposium_7b/
kindacognizant
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1838hf0
false
null
t3_1838hf0
/r/LocalLLaMA/comments/1838hf0/train_smarter_not_harder_minisymposium_7b/
false
false
https://b.thumbs.redditm…VD3jSCcFo73U.jpg
75
{'enabled': False, 'images': [{'id': 'hjvz7eNK6YIRNrIOTYxaoi0WXvXe2N4PSKM_GUQh5do', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-jeWhYyCBUUr6piQJUyDbkF58WsGlPORd81TdOVjALk.jpg?width=108&crop=smart&auto=webp&s=7fb93120e017d3bb6b663c564fb9720669dfa8b8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-jeWhYyCBUUr6piQJUyDbkF58WsGlPORd81TdOVjALk.jpg?width=216&crop=smart&auto=webp&s=03314a5173130edf8973b6839ee7ca60c202012f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-jeWhYyCBUUr6piQJUyDbkF58WsGlPORd81TdOVjALk.jpg?width=320&crop=smart&auto=webp&s=2ac50ed11373f5469cdf141d11f633b4b7463651', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-jeWhYyCBUUr6piQJUyDbkF58WsGlPORd81TdOVjALk.jpg?width=640&crop=smart&auto=webp&s=ab8157fe4a4eb453c0bcfdec288f491bd85b1cf3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-jeWhYyCBUUr6piQJUyDbkF58WsGlPORd81TdOVjALk.jpg?width=960&crop=smart&auto=webp&s=4f0d9384e25e6596b5a7536fd944d7107fb84adf', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-jeWhYyCBUUr6piQJUyDbkF58WsGlPORd81TdOVjALk.jpg?width=1080&crop=smart&auto=webp&s=3a9680c40da45bb2b9dd68490bd544e898dc71d4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-jeWhYyCBUUr6piQJUyDbkF58WsGlPORd81TdOVjALk.jpg?auto=webp&s=e217d9b8976bc0d85bc545e221d4ceb4f63c07b7', 'width': 1200}, 'variants': {}}]}
Hey Guys! Check out Amica: open source locally run LLM interface (use with any LLM) for communication with 3D characters. Featuring customizable natural voice chat, emotion system, MML (visual understanding).
122
2023-11-25T01:20:37
https://twitter.com/arbius_ai/status/1727135805041209770
slowsynapse
twitter.com
1970-01-01T00:00:00
0
{}
1837tvr
false
{'oembed': {'author_name': 'Arbius', 'author_url': 'https://twitter.com/arbius_ai', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Introducing... Amica! 👩\u200d🦰🤖<br><br>Amica is an open source interface for interactive communication with 3D characters with voice synthesis, speech recognition, visual understanding, and an emotion system.<br><br>🔗 <a href="https://t.co/93OmnNv7Bw">https://t.co/93OmnNv7Bw</a> <a href="https://t.co/vdIMSleyOY">pic.twitter.com/vdIMSleyOY</a></p>&mdash; Arbius (@arbius_ai) <a href="https://twitter.com/arbius_ai/status/1727135805041209770?ref_src=twsrc%5Etfw">November 22, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/arbius_ai/status/1727135805041209770', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_1837tvr
/r/LocalLLaMA/comments/1837tvr/hey_guys_check_out_amica_open_source_locally_run/
false
false
https://b.thumbs.redditm…23P_TWXWpHLE.jpg
122
{'enabled': False, 'images': [{'id': 't5OrUK912rS1OgphZtSuYZyH1Mc_E8MEhkTj6bvwNe4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/XfDWcW5e3O1R1_3bzjtB59N3q81KW7d5ZEKi8AGMF4E.jpg?width=108&crop=smart&auto=webp&s=0f38c48045379c574e985d6594af6b545a6faba7', 'width': 108}], 'source': {'height': 78, 'url': 'https://external-preview.redd.it/XfDWcW5e3O1R1_3bzjtB59N3q81KW7d5ZEKi8AGMF4E.jpg?auto=webp&s=044f54f705fd9723b9737ebc0986d4520b8bf25c', 'width': 140}, 'variants': {}}]}
Open Source LLM Training Code for > 20B model
1
I am trying to find good open-source code for fine-tuning an LLM for instruction tuning, especially looking for DeepSpeed, Hugging Face, etc.
2023-11-25T00:55:16
https://www.reddit.com/r/LocalLLaMA/comments/1837ax0/open_source_llm_training_code_for_20b_model/
rodeowrong
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1837ax0
false
null
t3_1837ax0
/r/LocalLLaMA/comments/1837ax0/open_source_llm_training_code_for_20b_model/
false
false
self
1
null
PEG (Progressively Learned Textual Embedding)
11
2023-11-25T00:53:12
https://huggingface.co/TownsWu/PEG
Thistleknot
huggingface.co
1970-01-01T00:00:00
0
{}
18379dq
false
null
t3_18379dq
/r/LocalLLaMA/comments/18379dq/peg_progressively_learned_textual_embedding/
false
false
https://b.thumbs.redditm…62WzhdJomVuY.jpg
11
{'enabled': False, 'images': [{'id': 'JaAcahcOO3KuzH41gnrCeXi2LQOchHx3ZrXWDix4xKo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/w8s9_ekQK7_EXx--wozI8JxQjM4-RdHwpwroW8nk2vk.jpg?width=108&crop=smart&auto=webp&s=c118ff4279e11a00e4e285e6a4301788c70b2801', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/w8s9_ekQK7_EXx--wozI8JxQjM4-RdHwpwroW8nk2vk.jpg?width=216&crop=smart&auto=webp&s=b829a85e39ded4533536b06418902bf142bf25dd', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/w8s9_ekQK7_EXx--wozI8JxQjM4-RdHwpwroW8nk2vk.jpg?width=320&crop=smart&auto=webp&s=5e792744de0129c0a4c243d0b6077020addacb3b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/w8s9_ekQK7_EXx--wozI8JxQjM4-RdHwpwroW8nk2vk.jpg?width=640&crop=smart&auto=webp&s=48ef803891b923d72fb28dfffd69531d9144e040', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/w8s9_ekQK7_EXx--wozI8JxQjM4-RdHwpwroW8nk2vk.jpg?width=960&crop=smart&auto=webp&s=ed6fab4921801f573f2f21c2a40effac3cda3d3a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/w8s9_ekQK7_EXx--wozI8JxQjM4-RdHwpwroW8nk2vk.jpg?width=1080&crop=smart&auto=webp&s=884b7391a97a5dcce28522c7ab258faa2d2a763a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/w8s9_ekQK7_EXx--wozI8JxQjM4-RdHwpwroW8nk2vk.jpg?auto=webp&s=25787bf7fad012213ac22e2af550d829db03309c', 'width': 1200}, 'variants': {}}]}
Sub Sentence Encoder
1
2023-11-25T00:52:24
https://github.com/schen149/sub-sentence-encoder
Thistleknot
github.com
1970-01-01T00:00:00
0
{}
18378sn
false
null
t3_18378sn
/r/LocalLLaMA/comments/18378sn/sub_sentence_encoder/
false
false
https://b.thumbs.redditm…YJAQdu1yHMhc.jpg
1
{'enabled': False, 'images': [{'id': 'o6Rcj1rA0Lo6as64GaMw27r8aQRn0STMx0SGykVJD_A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PJBqCs6zme6TO2th9nnc6LrlCfztNXIlzxFXGq5hgP0.jpg?width=108&crop=smart&auto=webp&s=ef55da4af555e5de807f7efa493649f5e313663f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PJBqCs6zme6TO2th9nnc6LrlCfztNXIlzxFXGq5hgP0.jpg?width=216&crop=smart&auto=webp&s=69f96628aee375f1b8e38c74fcf59033c61a3cea', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PJBqCs6zme6TO2th9nnc6LrlCfztNXIlzxFXGq5hgP0.jpg?width=320&crop=smart&auto=webp&s=4f7236ddf36e23e60fbd4056ea4785aec4528798', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PJBqCs6zme6TO2th9nnc6LrlCfztNXIlzxFXGq5hgP0.jpg?width=640&crop=smart&auto=webp&s=d66729bc6797d31aedd57d22ad542fb41b0202ea', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PJBqCs6zme6TO2th9nnc6LrlCfztNXIlzxFXGq5hgP0.jpg?width=960&crop=smart&auto=webp&s=84cb7c7ef46d4851e991f0279536c8df0bd6bd12', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PJBqCs6zme6TO2th9nnc6LrlCfztNXIlzxFXGq5hgP0.jpg?width=1080&crop=smart&auto=webp&s=9af20625a4198a5d74151753984896c96e839468', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PJBqCs6zme6TO2th9nnc6LrlCfztNXIlzxFXGq5hgP0.jpg?auto=webp&s=f6dfd1d605c425c757b7f5eb2bd574c82ad44015', 'width': 1200}, 'variants': {}}]}
Looking for hardware and model recommendations -- data center install
2
I'd like to run a local model on a server in a data center. Assumptions: * Lots of DDR4 RAM (assume 256GB with most available to the AI work) and storage, but no standalone GPU. * Looking for a web frontend and Auto GPT-ish ability (i.e., it can search for answers on the web) if such a thing is currently available with locally hosted models. * I'd like to train it on policy reports (assume 10-12 pages of text per report), but I can live without this if training is wholly off the table without a dedicated GPU. I'm very comfortable with Linux, running servers, virtual environments, etc. But not spun up on the latest in locally hosted LLMs. Assume I'm an idiot about all of this and point me in the right direction? Thanks!
2023-11-25T00:42:47
https://www.reddit.com/r/LocalLLaMA/comments/1837172/looking_for_hardware_and_model_recommendations/
GWBrooks
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1837172
false
null
t3_1837172
/r/LocalLLaMA/comments/1837172/looking_for_hardware_and_model_recommendations/
false
false
self
2
null
Break the Sequential Dependency of LLM Inference Using Lookahead Decoding
22
[https://lmsys.org/blog/2023-11-21-lookahead-decoding/](https://lmsys.org/blog/2023-11-21-lookahead-decoding/)
2023-11-25T00:35:12
https://www.reddit.com/r/LocalLLaMA/comments/1836vg8/break_the_sequential_dependency_of_llm_inference/
Thistleknot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1836vg8
false
null
t3_1836vg8
/r/LocalLLaMA/comments/1836vg8/break_the_sequential_dependency_of_llm_inference/
false
false
self
22
{'enabled': False, 'images': [{'id': 'icR3Ob4zLCI1nSlpjTQftasiFrBqERZYFRe8wZ4MLW4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NWLS2w-ftbEXOJMJyKnNs_P4sQEd-fzyjuxzzB9YOS4.jpg?width=108&crop=smart&auto=webp&s=9bf1b43dfbc0251c3053c625fb4c1fbb794ff804', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/NWLS2w-ftbEXOJMJyKnNs_P4sQEd-fzyjuxzzB9YOS4.jpg?width=216&crop=smart&auto=webp&s=491b47a8f82a9d9ff67b0b12746ee54e9699e547', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/NWLS2w-ftbEXOJMJyKnNs_P4sQEd-fzyjuxzzB9YOS4.jpg?width=320&crop=smart&auto=webp&s=c7b980c7fa2ae14fcbd7d5f73155316b4d139f92', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/NWLS2w-ftbEXOJMJyKnNs_P4sQEd-fzyjuxzzB9YOS4.jpg?width=640&crop=smart&auto=webp&s=d0089cd60bdacd51f916f5e28d0159e3495e5984', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/NWLS2w-ftbEXOJMJyKnNs_P4sQEd-fzyjuxzzB9YOS4.jpg?width=960&crop=smart&auto=webp&s=2980295730b827acbc4cb0f15106280e4a67b837', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/NWLS2w-ftbEXOJMJyKnNs_P4sQEd-fzyjuxzzB9YOS4.jpg?width=1080&crop=smart&auto=webp&s=f551ab4663709a8b4860007637150c6584832d5d', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/NWLS2w-ftbEXOJMJyKnNs_P4sQEd-fzyjuxzzB9YOS4.jpg?auto=webp&s=95f3a87e53ab764a5b1e9ec838d0b0469d624620', 'width': 1920}, 'variants': {}}]}
Are 7b models useful?
1
I'm struggling to get the 7B models to do anything useful; obviously I'm doing something wrong, as many people seem to swear by 7B models. But I myself cannot get them to follow instructions: they keep repeating stuff, and occasionally they start to converse with themselves. Does anyone have any pointers on what I'm doing wrong?
2023-11-24T23:10:57
https://www.reddit.com/r/LocalLLaMA/comments/1834yqv/are_7b_models_useful/
Naiw80
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1834yqv
false
null
t3_1834yqv
/r/LocalLLaMA/comments/1834yqv/are_7b_models_useful/
false
false
self
1
null
How do i save now?
1
I am using the simplified Ooba colab, but now it says that I am supposed to use the new colab link, since the simplified one is deprecated. However, the new Ooba colab has no option to turn on saving to Google Drive, so what do I do? How do I turn it on? I want to keep all my characters and chats.
2023-11-24T22:18:09
https://www.reddit.com/r/LocalLLaMA/comments/1833rkw/how_do_i_save_now/
warpwaver
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1833rkw
false
null
t3_1833rkw
/r/LocalLLaMA/comments/1833rkw/how_do_i_save_now/
false
false
self
1
null
Figuring out Model chat/instruction template
1
[removed]
2023-11-24T21:49:42
https://www.reddit.com/r/LocalLLaMA/comments/18333we/figuring_out_model_chatinstruction_template/
Double-Lavishness-77
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18333we
false
null
t3_18333we
/r/LocalLLaMA/comments/18333we/figuring_out_model_chatinstruction_template/
false
false
self
1
{'enabled': False, 'images': [{'id': 'urd-gOpHx6DzqXeQqsy2yaeJA0EJHFkUW198WyZ0Q3A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hm598yQoikDCgFDSunkqOrBDTvoD2MlbWzpw7gi4pAQ.jpg?width=108&crop=smart&auto=webp&s=a1e0ec1bb3f0ad5c06a4fe492777a0b2a7302f42', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hm598yQoikDCgFDSunkqOrBDTvoD2MlbWzpw7gi4pAQ.jpg?width=216&crop=smart&auto=webp&s=9beae58010081cd0a32e781b7fcf26161bd34b48', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hm598yQoikDCgFDSunkqOrBDTvoD2MlbWzpw7gi4pAQ.jpg?width=320&crop=smart&auto=webp&s=f1be40af369575d271bbd99509b38994b48958d4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hm598yQoikDCgFDSunkqOrBDTvoD2MlbWzpw7gi4pAQ.jpg?width=640&crop=smart&auto=webp&s=03a969d7c7655da05009cd1b9408ac2acf88014c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hm598yQoikDCgFDSunkqOrBDTvoD2MlbWzpw7gi4pAQ.jpg?width=960&crop=smart&auto=webp&s=3a42bdc92d0d8cf16b80362f176d150011b96c98', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hm598yQoikDCgFDSunkqOrBDTvoD2MlbWzpw7gi4pAQ.jpg?width=1080&crop=smart&auto=webp&s=919ca5f87bbfa468397ff26792b476c5f410e322', 'width': 1080}], 'source': {'height': 1160, 'url': 'https://external-preview.redd.it/hm598yQoikDCgFDSunkqOrBDTvoD2MlbWzpw7gi4pAQ.jpg?auto=webp&s=d3e91289313c483b1c7eabb487635993662a19a5', 'width': 2320}, 'variants': {}}]}
Best model to run in 4090
1
I am looking for a model that I can run on my PC with 32GB RAM, a 4090, and an i9-13900. I don't want it for coding. I tried Mistral 7B, but it is not as good as I expected. I will use it to create prompts for Stable Diffusion, so it should be good at understanding scenes and terms. Thanks.
2023-11-24T21:45:49
https://www.reddit.com/r/LocalLLaMA/comments/18330qp/best_model_to_run_in_4090/
sengunsipahi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
18330qp
false
null
t3_18330qp
/r/LocalLLaMA/comments/18330qp/best_model_to_run_in_4090/
false
false
self
1
null
Tokens per Second
2
Hey all, I have a few doubts about the method to calculate tokens per second for an LLM. 1) The way I calculate tokens per second of my fine-tuned models is: I put a timer in my Python code and divide token count by elapsed time. So if the length of my output is 20 tokens and the model took 5 seconds, then tokens per second is 4. Am I using the correct method, or is there a better one? 2) If my model does 4 tokens per second on 8 GB VRAM, will it do 8 tokens per second on 16 GB VRAM?
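The timer approach in (1) is basically right. A minimal sketch that separates the timing from the throughput math (function names are illustrative; `generate_fn` stands in for whatever call produces your output tokens):

```python
import time

def throughput(n_tokens: int, elapsed_s: float) -> float:
    """Tokens per second = generated tokens / wall-clock seconds."""
    return n_tokens / elapsed_s

def timed_generate(generate_fn):
    """Run a generation callable and return (output_tokens, tokens/sec)."""
    start = time.perf_counter()   # monotonic clock; preferred over time.time()
    tokens = generate_fn()
    elapsed = time.perf_counter() - start
    return tokens, throughput(len(tokens), elapsed)

# The example from the question: 20 output tokens in 5 seconds -> 4 tok/s.
print(throughput(20, 5.0))  # 4.0
```

On (2): no, more VRAM does not by itself make generation faster; token rate is bounded by memory bandwidth and compute, so the same model on a card with double the VRAM but the same bandwidth runs at roughly the same tok/s. Extra VRAM mainly lets you fit bigger models, less quantization, or longer context.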
2023-11-24T21:35:37
https://www.reddit.com/r/LocalLLaMA/comments/1832sgy/tokens_per_second/
meetrais
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1832sgy
false
null
t3_1832sgy
/r/LocalLLaMA/comments/1832sgy/tokens_per_second/
false
false
self
2
null
Open community list of uncensored chatbots
1
[removed]
2023-11-24T21:26:49
https://www.reddit.com/r/LocalLLaMA/comments/1832l7x/open_community_list_of_uncensored_chatbots/
screw_char_ai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1832l7x
false
null
t3_1832l7x
/r/LocalLLaMA/comments/1832l7x/open_community_list_of_uncensored_chatbots/
false
false
https://b.thumbs.redditm…DEN-PoezMa-M.jpg
1
null
Performance: Closed versus Open Source - Big Question Mark???
1
[removed]
2023-11-24T20:52:15
https://www.reddit.com/r/LocalLLaMA/comments/1831sf0/performance_closed_versus_open_source_big/
Leading-Swim-7937
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1831sf0
false
null
t3_1831sf0
/r/LocalLLaMA/comments/1831sf0/performance_closed_versus_open_source_big/
false
false
self
1
{'enabled': False, 'images': [{'id': 'FkVvq4uYx2-2BwhrjTj9x4BlOXJR1hc1P0bpBMpsPQk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/eqSCsaydLBe2O75sbMSTlZQYHiYoQzjt39-pGKD8jm8.jpg?width=108&crop=smart&auto=webp&s=e8e2123e416165a16787a631a0991c50a3eabcb1', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/eqSCsaydLBe2O75sbMSTlZQYHiYoQzjt39-pGKD8jm8.jpg?width=216&crop=smart&auto=webp&s=276e0aa224e4b70d7dd0466ce1bda9c7cf4d75ce', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/eqSCsaydLBe2O75sbMSTlZQYHiYoQzjt39-pGKD8jm8.jpg?width=320&crop=smart&auto=webp&s=5a08acb34df7df40da66f99ebcde9d3a03022127', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/eqSCsaydLBe2O75sbMSTlZQYHiYoQzjt39-pGKD8jm8.jpg?width=640&crop=smart&auto=webp&s=8a072668741811854053ce83a0dc41f1e519e372', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/eqSCsaydLBe2O75sbMSTlZQYHiYoQzjt39-pGKD8jm8.jpg?width=960&crop=smart&auto=webp&s=7fa1f8eec1666fa68f40e2b543d79bfcfaf36672', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/eqSCsaydLBe2O75sbMSTlZQYHiYoQzjt39-pGKD8jm8.jpg?width=1080&crop=smart&auto=webp&s=9adc94efddd6c2f912b6f03504e25ac3f1007e4a', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/eqSCsaydLBe2O75sbMSTlZQYHiYoQzjt39-pGKD8jm8.jpg?auto=webp&s=ddd9f69a156c06181300cbee1b10c5de57548b35', 'width': 1200}, 'variants': {}}]}
What is the easiest way for a noob using a Windows PC to locally host their own AI?
1
I want to dabble with locally hosted language models. Is there a "for dummies" installer for this?
2023-11-24T20:07:42
https://www.reddit.com/r/LocalLLaMA/comments/1830t2v/what_is_the_easiest_way_for_a_noob_using_a/
survivingmonday
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1830t2v
false
null
t3_1830t2v
/r/LocalLLaMA/comments/1830t2v/what_is_the_easiest_way_for_a_noob_using_a/
false
false
self
1
null
Bright Eye: Free multipurpose AI service for all your AI services!
1
[removed]
2023-11-24T19:50:21
https://www.reddit.com/r/LocalLLaMA/comments/1830exq/bright_eye_free_multipurpose_ai_service_for_all/
JanetJackson_32
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1830exq
false
null
t3_1830exq
/r/LocalLLaMA/comments/1830exq/bright_eye_free_multipurpose_ai_service_for_all/
false
false
self
1
{'enabled': False, 'images': [{'id': 'j9BOoAGSccutND6ogshNyb-xWVFtmdUvHV_lLdzYeVk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=108&crop=smart&auto=webp&s=76adf9aa07171352e3727b8d8bde812b5eb0f7ab', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=216&crop=smart&auto=webp&s=f180f4caf096ee042b700d70cabf941788671fda', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=320&crop=smart&auto=webp&s=204c440e43a68783d15356bdc60ce239349a1d9f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=640&crop=smart&auto=webp&s=2689ba6aac51113255e3e6e8acb33870740831e1', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=960&crop=smart&auto=webp&s=64ad438b94a08899677dfb6f2de708f8a7a6a81b', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=1080&crop=smart&auto=webp&s=0108876ef023c871780ce4ec2584fa4cb25c499c', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?auto=webp&s=9035877a533fae2ee246bfc56769d60e805f7b74', 'width': 1200}, 'variants': {}}]}
Bright Eye: Free multipurpose AI service for all your AI services!
1
[removed]
2023-11-24T19:50:20
https://www.reddit.com/r/LocalLLaMA/comments/1830ex5/bright_eye_free_multipurpose_ai_service_for_all/
JanetJackson_32
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1830ex5
false
null
t3_1830ex5
/r/LocalLLaMA/comments/1830ex5/bright_eye_free_multipurpose_ai_service_for_all/
false
false
self
1
{'enabled': False, 'images': [{'id': 'j9BOoAGSccutND6ogshNyb-xWVFtmdUvHV_lLdzYeVk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=108&crop=smart&auto=webp&s=76adf9aa07171352e3727b8d8bde812b5eb0f7ab', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=216&crop=smart&auto=webp&s=f180f4caf096ee042b700d70cabf941788671fda', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=320&crop=smart&auto=webp&s=204c440e43a68783d15356bdc60ce239349a1d9f', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=640&crop=smart&auto=webp&s=2689ba6aac51113255e3e6e8acb33870740831e1', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=960&crop=smart&auto=webp&s=64ad438b94a08899677dfb6f2de708f8a7a6a81b', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?width=1080&crop=smart&auto=webp&s=0108876ef023c871780ce4ec2584fa4cb25c499c', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/V0FBs4ANBGpyFs_PZ9-yzcRHWxeq6OF-hv624Ti8Fs0.jpg?auto=webp&s=9035877a533fae2ee246bfc56769d60e805f7b74', 'width': 1200}, 'variants': {}}]}
How do I pick a model for a given hardware?
1
[removed]
2023-11-24T19:08:20
https://www.reddit.com/r/LocalLLaMA/comments/182zhbn/how_do_i_pick_a_model_for_a_given_hardware/
Yugen42
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
182zhbn
false
null
t3_182zhbn
/r/LocalLLaMA/comments/182zhbn/how_do_i_pick_a_model_for_a_given_hardware/
false
false
self
1
{'enabled': False, 'images': [{'id': 'EN0-abblERL52DxeoNzcxdkhvXEwLdZMJTS58Umjs64', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=108&crop=smart&auto=webp&s=90f4efd1c1314faf5b0cd1c5eeb8d2835fe4a3ba', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=216&crop=smart&auto=webp&s=062336de177b9f9f124a98f4e03b59faa819be1d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=320&crop=smart&auto=webp&s=01e9aeccc0d76fee4ecb359bfb6238dc2afd87f0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=640&crop=smart&auto=webp&s=723b41bc410ff59454cf7a9a3db4eced43d4868f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=960&crop=smart&auto=webp&s=2c43ffe72f7f32d522c3e85c1aa8e25d6f213b38', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?width=1080&crop=smart&auto=webp&s=6a7d63eae44237642a3f95e586436bf6efe5dd70', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/tSZUq9622TYSgtzH4foFlGwz9n9ixCJUAgev8O2x8jI.jpg?auto=webp&s=51349e0b781d1c9e91535974e09833705c76a3cc', 'width': 1200}, 'variants': {}}]}
Best LLM for code understanding
2
Please can someone suggest an open source LLM which can understand code and tell us what improvements we can make to it.
2023-11-24T18:44:32
https://www.reddit.com/r/LocalLLaMA/comments/182yxfi/best_llm_for_code_understanding/
Far-Scientist-4096
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
182yxfi
false
null
t3_182yxfi
/r/LocalLLaMA/comments/182yxfi/best_llm_for_code_understanding/
false
false
self
2
null
Need some help.
1
I have an NVIDIA RTX 3050 with 4GB VRAM and 8GB RAM. Can y'all suggest some LLMs to fine-tune according to my requirements?
2023-11-24T17:20:32
https://www.reddit.com/r/LocalLLaMA/comments/182x35s/need_some_help/
Vkarthikbeatz-3827
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
182x35s
false
null
t3_182x35s
/r/LocalLLaMA/comments/182x35s/need_some_help/
false
false
self
1
null
Technical question about hardware limits
1
I want to begin by saying my specs are an RTX 4080 with 16GB VRAM + 32GB regular RAM. I've managed to run the Chronoboros 33B model pretty smoothly, even though a tad slow. Yet I've run into hardware issues (I think) trying to run TheBloke/Capybara-Tess-Yi-34B-200K-GPTQ and Panchovix/WizardLM-33B-V1.0-Uncensored-SuperHOT-8k (tried both AWQ and GPTQ). Is there a reason models with a pretty similar number of parameters won't run?
2023-11-24T17:20:28
https://www.reddit.com/r/LocalLLaMA/comments/182x34e/technical_question_about_hardware_limits/
Mobile-Bandicoot-553
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
182x34e
false
null
t3_182x34e
/r/LocalLLaMA/comments/182x34e/technical_question_about_hardware_limits/
false
false
self
1
null
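Since the hardware question above hinges on whether a quantized ~34B model fits in 16 GB of VRAM, here is a rough back-of-the-envelope sketch. The 4-bit weight size and the ~20% overhead figure are illustrative assumptions, not measurements from the post:

```python
# Back-of-the-envelope VRAM estimate for quantized model weights.
# Assumptions (not from the post): ~4 bits per weight for GPTQ/AWQ,
# plus ~20% overhead for KV cache, activations, and the CUDA context.

def approx_vram_gb(n_params_billion, bits_per_weight, overhead=0.2):
    """Approximate footprint in GB: weights plus a fractional overhead."""
    weight_gb = n_params_billion * bits_per_weight / 8  # 1B params at 8 bits ~= 1 GB
    return weight_gb * (1 + overhead)

print(approx_vram_gb(33, 4))   # ~19.8 GB: tight even on 24 GB, too big for 16 GB
print(approx_vram_gb(34, 4))   # ~20.4 GB: will not load on a 16 GB card without offloading
```

By this estimate a 34B 4-bit model alone overshoots 16 GB, which would explain out-of-memory failures unless some layers are offloaded to system RAM.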
AMD EPYC CPU or 1x RTX 4090?
14
I plan to infer 33B models at full precision, 70B is second priority but a nice touch. Would I be better off getting an AMD EPYC server CPU like [this](https://www.ebay.com/itm/185839904091?_trkparms=amclksrc%3DITM%26aid%3D777008%26algo%3DPERSONAL.TOPIC%26ao%3D1%26asc%3D20230823115209%26meid%3Dc83f1903e1b744308866ff9ae0bf7d3d%26pid%3D101800%26rk%3D1%26rkt%3D1%26sd%3D185839904091%26itm%3D185839904091%26pmt%3D1%26noa%3D1%26pg%3D4375194%26algv%3DRecentlyViewedItemsV2SignedOut%26brand%3DAMD&_trksid=p4375194.c101800.m5481&_trkparms=parentrq%3A024d101b18c0a24212bcdbe3ffffc03c%7Cpageci%3Af5d7ebd7-8aeb-11ee-a352-9eab04fc32fd%7Ciid%3A1%7Cvlpname%3Avlp_homepage) or an RTX 4090? With the EPYC, I am able to get 384GB DDR4 RAM for ~400 USD on eBay; the 4090 only has 24GB. Moreover, both the 4090 and the EPYC setup + RAM cost about the same. Which would be a better buy?
2023-11-24T17:10:04
https://www.reddit.com/r/LocalLLaMA/comments/182wutt/amd_epyc_cpu_or_1x_rtx_4090/
M000lie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
182wutt
false
null
t3_182wutt
/r/LocalLLaMA/comments/182wutt/amd_epyc_cpu_or_1x_rtx_4090/
false
false
self
14
{'enabled': False, 'images': [{'id': 'KKwMFEo7D7bfQT5qdbE-RHdAfIYHYeddh-0nP64LqOw', 'resolutions': [{'height': 95, 'url': 'https://external-preview.redd.it/8tXAk6kCAEoZSiUbRx-IB2ZAv52vXl--tE5wBSn_5V8.jpg?width=108&crop=smart&auto=webp&s=dafd091c0c848bdfb493a31c1d192f7999bc412d', 'width': 108}, {'height': 191, 'url': 'https://external-preview.redd.it/8tXAk6kCAEoZSiUbRx-IB2ZAv52vXl--tE5wBSn_5V8.jpg?width=216&crop=smart&auto=webp&s=f554d6c317b85790aa1b6aa2be74b52dcd547744', 'width': 216}, {'height': 283, 'url': 'https://external-preview.redd.it/8tXAk6kCAEoZSiUbRx-IB2ZAv52vXl--tE5wBSn_5V8.jpg?width=320&crop=smart&auto=webp&s=428655f9db39dc0959462cd3c40b573f17142ba8', 'width': 320}], 'source': {'height': 354, 'url': 'https://external-preview.redd.it/8tXAk6kCAEoZSiUbRx-IB2ZAv52vXl--tE5wBSn_5V8.jpg?auto=webp&s=ad5acc77af84f91c5232c61936ca0a152730b301', 'width': 400}, 'variants': {}}]}
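The trade-off in the post above is largely capacity vs. memory bandwidth: single-stream inference speed is roughly bounded by bandwidth divided by model size, since each generated token reads approximately the whole model once. A hedged sketch, where the bandwidth figures (~1008 GB/s for a 4090, ~204 GB/s for 8-channel DDR4-3200) are ballpark assumptions rather than benchmarks:

```python
# Bandwidth-bound upper limit on single-stream decode speed:
# tok/s <= memory_bandwidth / bytes_read_per_token (~= model size).
# Bandwidth figures below are ballpark assumptions, not from the post:
# RTX 4090 GDDR6X ~1008 GB/s; 8-channel DDR4-3200 EPYC ~204 GB/s.

def max_tokens_per_s(model_gb, bandwidth_gb_s):
    return bandwidth_gb_s / model_gb

fp16_33b = 33 * 2   # 66 GB at full (16-bit) precision
fp16_70b = 70 * 2   # 140 GB

print(max_tokens_per_s(fp16_33b, 204))    # EPYC: ~3 tok/s, but 66 GB fits easily in 384 GB RAM
print(max_tokens_per_s(fp16_33b, 1008))   # 4090: much faster in principle, but 66 GB > 24 GB VRAM
```

In other words, the EPYC can hold both models at full precision but decodes slowly, while the 4090 is bandwidth-rich yet cannot fit either model in 24 GB without quantization or offloading.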
Understand Q*
1
I've been hearing Q* = Q-learning + A* (search algorithm). I'm trying to make some sense of it, so let me know what I missed or got wrong. Here's what I know: it's supposed to improve language model decoding. 1. Q-learning is a form of model-free reinforcement learning where an agent learns to maximize a cumulative reward. Applied to language models, the actions could be the selection of tokens, with the reward being the effectiveness of the generated response. 2. A* is an informed, best-first search algorithm which uses heuristics to estimate the best path to the goal. In language generation, the goal could be the most coherent and contextually relevant completion (chat response). - Beam Search in Decoding: this method, used in LLMs, looks at a set of possible next sequences instead of just the single most likely next token. In a hypothetical Q* approach: - Informed Token Selection: it could use heuristics, based on context and language understanding, to guide the selection of token sequences. - Maximizing Future Reward: like Q-learning, it would aim to maximize a future reward, potentially based on coherence, relevance, or user engagement with the generated text. - Beyond Simple Probability Multiplication: rather than merely multiplying probabilities of token sequences, it could evaluate sequences based on a combined heuristic- and reward-based framework. In theory this could lead to more effective, contextually relevant text generation, especially in scenarios that require a balance between creativity and specific guidelines or objectives.
2023-11-24T17:02:27
https://www.reddit.com/r/LocalLLaMA/comments/182wom5/understand_q/
tmacnadidas
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
182wom5
false
null
t3_182wom5
/r/LocalLLaMA/comments/182wom5/understand_q/
false
false
self
1
null
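The decoding scheme speculated in the Q* post above (expand beams as usual, but rank candidates by accumulated log-probability plus an estimated future reward, A*-style) can be sketched as follows. Everything here is hypothetical: `heuristic` is a stand-in for whatever learned Q-value estimate such a system might actually use.

```python
import math

# Sketch of heuristic-guided beam search: expand beams as in ordinary beam
# search, but sort candidates by log-probability PLUS a future-reward estimate.
# `heuristic` is a hypothetical stand-in; in the speculated Q* it would be a
# learned Q-value estimate rather than a hand-written function.

def guided_beam_step(beams, next_token_probs, heuristic, width=3):
    """beams: list of (tokens, logp). next_token_probs: function tokens -> {token: prob}."""
    candidates = []
    for tokens, logp in beams:
        for tok, p in next_token_probs(tokens).items():
            seq = tokens + [tok]
            new_logp = logp + math.log(p)
            # A*-style priority: cost so far (log-prob) + heuristic estimate of what follows
            candidates.append((seq, new_logp, new_logp + heuristic(seq)))
    candidates.sort(key=lambda c: c[2], reverse=True)
    return [(seq, lp) for seq, lp, _ in candidates[:width]]
```

With a heuristic that returns a constant, this reduces to ordinary beam search; a heuristic that rewards coherence or task progress is exactly the ingredient that would go "beyond simple probability multiplication."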
Hi! Looking for LLaMA experts for project to use it for education as co-worker for the teacher
2
Hi LocalLLaMA. My wife and I had an amazing idea about how to use LLaMA for education, and after searching around we realized that ChatGPT is attempting the same thing, but we believe we can overcome the problems they identify as the core issues in doing so. Anyone who likes artificial intelligence and Vigotski want to talk it over and join us in the project? ChatGPT news: [https://knowledge.insead.edu/leadership-organisations/chatgpt-and-future-business-education](https://knowledge.insead.edu/leadership-organisations/chatgpt-and-future-business-education)
2023-11-24T16:52:03
https://www.reddit.com/r/LocalLLaMA/comments/182wg2c/hi_looking_for_llama_experts_for_project_to_use/
sublingualwart
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
182wg2c
false
null
t3_182wg2c
/r/LocalLLaMA/comments/182wg2c/hi_looking_for_llama_experts_for_project_to_use/
false
false
self
2
{'enabled': False, 'images': [{'id': 'JqnKbnMakIia8tsBDOBaEFoGpQ-K8FcMFYDkvrrWr-A', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/0fvIw8HsFhADb7XjP9jdg4mo8fgfdwC7c8Bab2EKfeE.jpg?width=108&crop=smart&auto=webp&s=c22d644cfaedfa2a959da7ab3c53d58ed708208f', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/0fvIw8HsFhADb7XjP9jdg4mo8fgfdwC7c8Bab2EKfeE.jpg?width=216&crop=smart&auto=webp&s=ffcd13424490250d123cc5854d41b0c3811572ef', 'width': 216}, {'height': 212, 'url': 'https://external-preview.redd.it/0fvIw8HsFhADb7XjP9jdg4mo8fgfdwC7c8Bab2EKfeE.jpg?width=320&crop=smart&auto=webp&s=ae071e8a9f9938a501e24edabea20bf52d7a3592', 'width': 320}, {'height': 424, 'url': 'https://external-preview.redd.it/0fvIw8HsFhADb7XjP9jdg4mo8fgfdwC7c8Bab2EKfeE.jpg?width=640&crop=smart&auto=webp&s=870d81d398f32aa9ad13d9fb555155a2153ff2c4', 'width': 640}, {'height': 636, 'url': 'https://external-preview.redd.it/0fvIw8HsFhADb7XjP9jdg4mo8fgfdwC7c8Bab2EKfeE.jpg?width=960&crop=smart&auto=webp&s=55c7c3a4f98bb5eae0520a6553b4b7d753b4f2a5', 'width': 960}], 'source': {'height': 663, 'url': 'https://external-preview.redd.it/0fvIw8HsFhADb7XjP9jdg4mo8fgfdwC7c8Bab2EKfeE.jpg?auto=webp&s=a4569449ec46df83966a056327fe0459e42129c3', 'width': 1000}, 'variants': {}}]}
Will We Ever Outgrow the Need for Fine-Tuned Models in AI?
1
[removed]
2023-11-24T16:44:23
https://www.reddit.com/r/LocalLLaMA/comments/182wa48/will_we_ever_outgrow_the_need_for_finetuned/
EcstaticVenom
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
182wa48
false
null
t3_182wa48
/r/LocalLLaMA/comments/182wa48/will_we_ever_outgrow_the_need_for_finetuned/
false
false
self
1
null