| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
API vs Local setup for research? | 5 | Hi guys, been lurking for a couple of weeks now, and genuinely really impressed with all the happenings in this sub even though I understand maybe a quarter of everything I read - like how ERP def is about enterprise resource planning.
Right now I’m working on a project to model the degree of online parasocial relationships between content creators and their audience. You can think of it as a “fandom” score. Not to bore you with too many details, but I’m using the OpenAI API to get GPT-3.5 to go through my dataset (YouTube comments) and label each comment “yes” or “no” based on different prompts.
For now, the most frustrating part is prompt engineering (GPT-3.5 is not the most consistent), but I can foresee costs being the most limiting factor as I want to grow my dataset over time. Right now it’s not too bad: 75k comments + prompts ends up being $150-200 for all the labels.
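For concreteness, the labeling call is basically the following (a minimal sketch using the official `openai` Python client; the model name and prompt wording are just placeholders for what I actually use):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def label_comment(comment: str) -> str:
    """Ask GPT-3.5 for a yes/no parasocial-attachment label on one YouTube comment."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,  # keep labels as consistent as possible
        messages=[
            {"role": "system", "content": "You label YouTube comments. Answer only 'yes' or 'no'."},
            {"role": "user", "content": f"Does this comment show a parasocial attachment to the creator?\n\n{comment}"},
        ],
    )
    return response.choices[0].message.content.strip().lower()
```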
But as I scale the project, I’m wondering if it’s better to host an LLM myself. My shitbox laptop won’t be able to handle it, but I’m using Paperspace right now and was wondering if that would be viable?
To compete with the quality of GPT-3.5, I was thinking of using QLoRA to fine-tune a quantized model for analyzing YT comments, looking specifically for nuances in language relevant to the project.
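What I have in mind is roughly the standard QLoRA recipe (a sketch using transformers + peft + bitsandbytes; the base model and hyperparameters are placeholders, not a tested config):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"  # placeholder base model
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()
# ...then fine-tune on the labeled YT comments with Trainer/SFTTrainer.
```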
Would this be a viable approach? Or is this overkill? Is it even the right approach?
What are your experiences with some of these models? I’m using Paperspace Gradient Pro; their two best machines at this tier seem to be the A4000 w/ 16GB VRAM & 45GB RAM
and the P6000 w/ 24GB VRAM & 30GB RAM.
I’m down to upgrade the plan to Growth, which costs about $39/mo, but if I can host the model and run everything myself, would it wind up being cheaper than paying for tokens?
**TL;DR:** noob social researcher wondering if it’s cheaper/more efficient to fine-tune & host a quantized model via cloud service (paperspace) or pay OpenAI. | 2024-01-08T09:58:20 | https://www.reddit.com/r/LocalLLaMA/comments/191hrjq/api_vs_local_setup_for_research/ | GeeBrain | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 191hrjq | false | null | t3_191hrjq | /r/LocalLLaMA/comments/191hrjq/api_vs_local_setup_for_research/ | false | false | self | 5 | null |
Model recommendation - Browser copilot | 1 | Hello
I'm looking for a model (~13B) that would be a good fit for a browser extension that we want to use to predict what text may come next.
An example usage would be writing this exact same Reddit post: the model would get some context from the active tab and the current input text (before and after the current cursor position, up to a limit), and it should predict what text may come next, so you can press Tab to autofill it (similar to GitHub Copilot).
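Roughly what I mean, as a sketch of how the extension would assemble the request (plain Python for illustration; the `<CURSOR>` marker is just a convention here, not a real special token of any particular model):

```python
def build_prompt(page_context: str, before_cursor: str, after_cursor: str, limit: int = 2000) -> str:
    """Assemble the completion prompt from the active tab and the text around the cursor."""
    prefix = before_cursor[-limit:]   # cap how much text we send
    suffix = after_cursor[:limit]
    return (
        f"Page context:\n{page_context[:limit]}\n\n"
        "Continue the user's text at <CURSOR>. Return only the continuation.\n\n"
        f"{prefix}<CURSOR>{suffix}"
    )
```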
Does anyone have any recommendations for a model that would be a good fit for this? | 2024-01-08T09:47:02 | https://www.reddit.com/r/LocalLLaMA/comments/191hluc/model_recommendation_browser_copilot/ | ithanlara1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 191hluc | false | null | t3_191hluc | /r/LocalLLaMA/comments/191hluc/model_recommendation_browser_copilot/ | false | false | self | 1 | null |
Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models. LATS achieves 94.4% for programming on HumanEval with GPT-4 and an average score of 75.9 for web browsing on WebShop with GPT-3.5, demonstrating the effectiveness and generality of this method. | 1 | [removed] | 2024-01-08T09:42:57 | TysonUsykFury | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 191hjq6 | false | null | t3_191hjq6 | /r/LocalLLaMA/comments/191hjq6/language_agent_tree_search_unifies_reasoning/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'w796KE4rqVxZkloaBGAx-IwUnYgunggwErNdZQ3cSfU', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/lsik91e2t6bc1.png?width=108&crop=smart&auto=webp&s=ceaec92d9cea146f3e5a98743df692ec8489eea0', 'width': 108}, {'height': 325, 'url': 'https://preview.redd.it/lsik91e2t6bc1.png?width=216&crop=smart&auto=webp&s=683802fa5c023d62251a1970626a95bc478da6e4', 'width': 216}, {'height': 482, 'url': 'https://preview.redd.it/lsik91e2t6bc1.png?width=320&crop=smart&auto=webp&s=f2357e8c90f71cfd35c70860cbdd174344a0da4d', 'width': 320}, {'height': 964, 'url': 'https://preview.redd.it/lsik91e2t6bc1.png?width=640&crop=smart&auto=webp&s=687ed07edb03ecec2dd4e8c2796fd74f21b2700a', 'width': 640}, {'height': 1447, 'url': 'https://preview.redd.it/lsik91e2t6bc1.png?width=960&crop=smart&auto=webp&s=f878d914c7e5b705bc9f1c4b8203657fd4a1faaf', 'width': 960}, {'height': 1628, 'url': 'https://preview.redd.it/lsik91e2t6bc1.png?width=1080&crop=smart&auto=webp&s=76885bad05b3a807d9ffc622c7b25ee078a735a6', 'width': 1080}], 'source': {'height': 1628, 'url': 'https://preview.redd.it/lsik91e2t6bc1.png?auto=webp&s=950b6938a7b2f74e7510359d9b940d1b638e94d7', 'width': 1080}, 'variants': {}}]} | ||
EC2 Instance, SageMaker or something better ? | 1 | [removed] | 2024-01-08T09:03:25 | https://www.reddit.com/r/LocalLLaMA/comments/191gz64/ec2_instance_sagemaker_or_something_better/ | Ornery_Researcher_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 191gz64 | false | null | t3_191gz64 | /r/LocalLLaMA/comments/191gz64/ec2_instance_sagemaker_or_something_better/ | false | false | self | 1 | null |
Best models for adult novels that can be run on Apple Silicon? | 1 | What are the best uncensored Apple-compatible models for spicy novels?
Thanks | 2024-01-08T08:46:38 | https://www.reddit.com/r/LocalLLaMA/comments/191gqjh/best_models_for_adult_novels_that_can_be_run_on/ | Cloudy-And-Sunny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 191gqjh | false | null | t3_191gqjh | /r/LocalLLaMA/comments/191gqjh/best_models_for_adult_novels_that_can_be_run_on/ | false | false | self | 1 | null |
Llava on CPU? | 1 | Has anyone run Llava ([https://github.com/haotian-liu/LLaVA](https://github.com/haotian-liu/LLaVA)) model on CPU's without installing Intel Extention for Pytorch environment for inference? Any pointer will be really helpful | 2024-01-08T08:37:52 | https://www.reddit.com/r/LocalLLaMA/comments/191gm2q/llava_on_cpu/ | MLer-India | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 191gm2q | false | null | t3_191gm2q | /r/LocalLLaMA/comments/191gm2q/llava_on_cpu/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '6U37qjIXKQOAA8I73EjyXx_OErjVSduKDlTPrIoBlWE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bH8ol9o4wom9RTIxdUSBuPHFCAzKRrshwnNhdZHMMJ4.jpg?width=108&crop=smart&auto=webp&s=6f55623724c5e41a5e8f5c095e53e9b8705e851d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bH8ol9o4wom9RTIxdUSBuPHFCAzKRrshwnNhdZHMMJ4.jpg?width=216&crop=smart&auto=webp&s=0ddbd346084d8eb39bc1637e317d2e888130efe1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bH8ol9o4wom9RTIxdUSBuPHFCAzKRrshwnNhdZHMMJ4.jpg?width=320&crop=smart&auto=webp&s=89ac000eab366320a5022fe69e936885c8e6debb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bH8ol9o4wom9RTIxdUSBuPHFCAzKRrshwnNhdZHMMJ4.jpg?width=640&crop=smart&auto=webp&s=c7a82a29e029517c843877a4114980ab632ae477', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bH8ol9o4wom9RTIxdUSBuPHFCAzKRrshwnNhdZHMMJ4.jpg?width=960&crop=smart&auto=webp&s=5377c0d166e1444b6d0ed19e8e7a51ef7ae14ce6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bH8ol9o4wom9RTIxdUSBuPHFCAzKRrshwnNhdZHMMJ4.jpg?width=1080&crop=smart&auto=webp&s=82896f059f03baf5c4da908edb9a99f02a08f45c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bH8ol9o4wom9RTIxdUSBuPHFCAzKRrshwnNhdZHMMJ4.jpg?auto=webp&s=d34ac3e10edb447dbfb7fa95df776c7727629364', 'width': 1200}, 'variants': {}}]} |
Fastest inference library for prod environment? | 12 | I'd like to deploy an open source models for a production use case. Users will be directly interfacing with the model so I need to minimize latency. I know vLLM is one option but are there any other libraries that would offer better performance in terms of inference speed and latency? | 2024-01-08T07:43:07 | https://www.reddit.com/r/LocalLLaMA/comments/191fsfp/fastest_inference_library_for_prod_environment/ | MonkeyMaster64 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 191fsfp | false | null | t3_191fsfp | /r/LocalLLaMA/comments/191fsfp/fastest_inference_library_for_prod_environment/ | false | false | self | 12 | null |
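For reference, the vLLM baseline I'd be comparing alternatives against is roughly this (a minimal offline sketch; the model name is just an example):

```python
from vllm import LLM, SamplingParams

# Load once; vLLM handles batching of concurrent requests (continuous batching).
llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")
params = SamplingParams(temperature=0.7, max_tokens=256)

outputs = llm.generate(["[INST] Summarize our refund policy. [/INST]"], params)
print(outputs[0].outputs[0].text)
```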
Sam Altman when GPT store launches | 7 | 2024-01-08T07:40:19 | TysonUsykFury | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 191fqvc | false | null | t3_191fqvc | /r/LocalLLaMA/comments/191fqvc/sam_altman_when_gpt_store_launches/ | false | false | 7 | {'enabled': True, 'images': [{'id': 'NdP4YW_XtjTu31Y53jYT7nUVU9wRM9dyHlRvMGZq4EM', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/7mg1agp676bc1.jpeg?width=108&crop=smart&auto=webp&s=133e76059ce4b877ecacb96239a3e2f652001b8a', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/7mg1agp676bc1.jpeg?width=216&crop=smart&auto=webp&s=8c4a741237135ca224a201bdf103686ecbfb33db', 'width': 216}, {'height': 178, 'url': 'https://preview.redd.it/7mg1agp676bc1.jpeg?width=320&crop=smart&auto=webp&s=90529e0ea265984c28e81091f84ed1547c349fb5', 'width': 320}], 'source': {'height': 344, 'url': 'https://preview.redd.it/7mg1agp676bc1.jpeg?auto=webp&s=12e38b82be4bdc5da0d099e09be32fb0700b566b', 'width': 615}, 'variants': {}}]} | |||
Local TTS that can read a whole book | 1 | I use whisper a lot and it’s very good. However, there is word limit. Is there an app that can TTS a whole pdf or epub? | 2024-01-08T06:31:42 | https://www.reddit.com/r/LocalLLaMA/comments/191eo1b/local_tts_that_can_read_a_whole_book/ | IpadWriter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 191eo1b | false | null | t3_191eo1b | /r/LocalLLaMA/comments/191eo1b/local_tts_that_can_read_a_whole_book/ | false | false | self | 1 | null |
Should SOTA performance also be measured by efficiency? If we don't know the size (parameter count) of either Gemini Ultra or GPT 4, can this not become an unfair comparison? Why is Google and OpenAI so scared of openness in testing? | 1 | 2024-01-08T06:30:05 | https://www.reddit.com/gallery/191en13 | TysonUsykFury | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 191en13 | false | null | t3_191en13 | /r/LocalLLaMA/comments/191en13/should_sota_performance_also_be_measured_by/ | false | false | 1 | null | ||
Multi language Translation Local GPT | 1 | Is there a local gpt that can translate a whole pdf or epub file? Tired of manually copying page after page into ChatGPT | 2024-01-08T06:28:01 | https://www.reddit.com/r/LocalLLaMA/comments/191elv2/multi_language_translation_local_gpt/ | IpadWriter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 191elv2 | false | null | t3_191elv2 | /r/LocalLLaMA/comments/191elv2/multi_language_translation_local_gpt/ | false | false | self | 1 | null |
llm struggles to interpret the context in response synthesis | 1 | Suppose we have a bunch of documents embedded in the vector db. And the user query is asking for an article mentioning "not driving a truck." RAG is retrieving the right documents, as it retrieves the documents mentioning both "not driving a truck" and "not driving a car." gpt4 only uses the phrases from the documents mentioning "not driving a truck." I wonder if anyone has ever faced this issue before. Would love to hear how you solved it. | 2024-01-08T06:20:58 | https://www.reddit.com/r/LocalLLaMA/comments/191ehsf/llm_struggles_to_interpret_the_context_in/ | Lumpy-Breakfast3295 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 191ehsf | false | null | t3_191ehsf | /r/LocalLLaMA/comments/191ehsf/llm_struggles_to_interpret_the_context_in/ | false | false | self | 1 | null |
Decent LLMs on a Pi5. | 1 | [removed] | 2024-01-08T05:16:11 | https://www.reddit.com/r/LocalLLaMA/comments/191dczs/decent_llms_on_a_pi5/ | Sad_Preference2457 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 191dczs | false | null | t3_191dczs | /r/LocalLLaMA/comments/191dczs/decent_llms_on_a_pi5/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'zAtW-iYarik5cBzFtk4y-cvIi7TsbZHVI2v20SZr5Qo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/zZovr0aPf7jGFN1QpgwvYGi6pTkunYUWBoT3MpOkKKQ.jpg?width=108&crop=smart&auto=webp&s=c9a2a403df7fc9c87c718581a7dfdb8f3ef2ac71', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/zZovr0aPf7jGFN1QpgwvYGi6pTkunYUWBoT3MpOkKKQ.jpg?width=216&crop=smart&auto=webp&s=e1cd4138a8e22a12ffad0021a9dd139792dae4d2', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/zZovr0aPf7jGFN1QpgwvYGi6pTkunYUWBoT3MpOkKKQ.jpg?width=320&crop=smart&auto=webp&s=37dc4292a902762a65de9b1fe13681aeec775860', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/zZovr0aPf7jGFN1QpgwvYGi6pTkunYUWBoT3MpOkKKQ.jpg?auto=webp&s=506f306e394466bad5702ba4927c96182a38b4af', 'width': 480}, 'variants': {}}]} |
Does anyone know a good model to make NSFW characters? | 1 | [removed] | 2024-01-08T04:57:22 | https://www.reddit.com/r/LocalLLaMA/comments/191d0cw/does_anyone_know_a_good_model_to_make_nsfw/ | bimibio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 191d0cw | false | null | t3_191d0cw | /r/LocalLLaMA/comments/191d0cw/does_anyone_know_a_good_model_to_make_nsfw/ | false | false | nsfw | 1 | null |
Confused about using custom formatting in Axolotl | 3 | Hi, is there anyone who uses Axolotl and knows how to properly structure the custom user prompt?
My dataset is in the json format and contains multi-turn conversations:
```
[
{
"Chat": [
{"Human":"Question 1"},
{"Assistant":"Answer to question 1"},
{"Human":"Follow-up to question 1"},
{"Assistant":"Answer to the question 1 follow-up"},
((more turns))
{"System":"System message"}
]
},
{
"Chat": [
{"Human":"Question 2"},
{"Assistant":"Answer to question 2"},
{"Human":"Follow-up to question 2"},
{"Assistant":"Answer to the question 2 follow-up"},
((more turns))
{"System":"System message"}
]
}
]
```
If I'm understanding it correctly (I'm probably not), simply using the defaults would train the model with the turns being "separated" from each other, resulting in the base model being confused about the follow-up questions without having the context about the initial questions. Instead, I want to format it similarly to this:
```
System message
### Human: Question 1
### Assistant: Answer to question 1
### Human: Follow-up to question 1
### Assistant: Answer to the question 1 follow-up
```
How would I configure the .yaml file to respect this format? Or is there a more graceful way to achieve my task? Sorry for the noob question, and thank you. | 2024-01-08T04:39:58 | https://www.reddit.com/r/LocalLLaMA/comments/191copo/confused_about_using_custom_formatting_in_axolotl/ | ArakiSatoshi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 191copo | false | null | t3_191copo | /r/LocalLLaMA/comments/191copo/confused_about_using_custom_formatting_in_axolotl/ | false | false | self | 3 | null |
Mixtral Speed Question | 4 | So I am testing Mixtral 8x7B Instruct using TheBloke's Q5 quant.
I am using it in tests with a corporate HR chatbot; currently the client uses Claude. The way their bot works, the entire HR manual is in the context, and the agent just answers from the context.
Context size is just under 20k Tokens.
Currently I am testing using the character card and chat-instruct mode in Oobabooga.
We have tested on two different RunPod setups: the first was an H100 80GB and the second was 4x RTX 4090 (96GB total).
On the H100 the first question took ~2.5min to respond. After that QA was nearly instantaneous.
The RTX setup was north of 4 minutes when I pulled the plug.
In both cases, RAM/CPU usage was like 1%, VRAM usage was under 60%, and GPU compute was under 60%.
Why does it take so long to digest the context the first time?
Is there any way to speed it up?
Should I be testing with the API instead, and am I missing something stupid here lol..
Thanks! | 2024-01-08T04:22:43 | https://www.reddit.com/r/LocalLLaMA/comments/191cd0f/mixtral_speed_question/ | aallsbury | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 191cd0f | false | null | t3_191cd0f | /r/LocalLLaMA/comments/191cd0f/mixtral_speed_question/ | false | false | self | 4 | null |
Non-enterprise question: 4000 Super series? | 1 | I gotta get a new computer. How do you guys think the new 4000 Super series is shaping up? I’m reading the 4070 Ti has an upgraded bus and more VRAM.
Most recent posts have been talking 120b+ or mixtral, so I dunno if we’re talking about consumer stuff anymore :-) | 2024-01-08T04:22:35 | https://www.reddit.com/r/LocalLLaMA/comments/191ccwx/nonenterprise_question_4000_super_series/ | hmmqzaz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 191ccwx | false | null | t3_191ccwx | /r/LocalLLaMA/comments/191ccwx/nonenterprise_question_4000_super_series/ | false | false | self | 1 | null |
What is Apple thinking? Why are they so radio silent about LLMs despite having the edge over Microsoft/Google to bring local LLMs to iDevices? | 50 | (detail in the comments) | 2024-01-08T04:21:06 | https://www.reddit.com/r/LocalLLaMA/comments/191cbwh/what_is_apple_thinking_why_are_they_so_radio/ | nderstand2grow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 191cbwh | false | null | t3_191cbwh | /r/LocalLLaMA/comments/191cbwh/what_is_apple_thinking_why_are_they_so_radio/ | false | false | self | 50 | null |
Currently working on building a Worldbuild+RP dataset | 50 | So I scraped all the Worldbuilding Stack Exchange Q's with 5+ rep, which left me with 18,000 Q's. It's going to be a good resource for world-building and world details, which would probably augment RP.
The Raw release of the 18000 Q's is out here:
https://huggingface.co/datasets/VatsaDev/worldbuild
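If you want to poke at it, loading the raw dump should be as simple as this (a minimal sketch with the `datasets` library; check the dataset card for the actual split and column names):

```python
from datasets import load_dataset

# Pull the raw dump from the Hugging Face Hub.
ds = load_dataset("VatsaDev/worldbuild", split="train")  # split name assumed
print(ds)
print(ds[0])  # one "[Question] ... [Answer] ..." record
```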
This is Phase one; there are still several things to do, including:
- Need good roleplay dataset recommendations to add to this, RN just looking at Pippa
- More worldbuild resources: don't really know that many beyond this subreddit, would love to add more.
- Synth data? An OH-2 with RAG could probably give more data in a chat format, though this probably won't be necessary with everything else
Dataset Format
[Question] ... [Answer] ... [Answer] ... [Answer] etc, moves to next example, all the answers sorted by best votes. | 2024-01-08T03:55:51 | https://www.reddit.com/r/LocalLLaMA/comments/191bufg/currently_working_on_building_a_worldbuildrp/ | vatsadev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 191bufg | false | null | t3_191bufg | /r/LocalLLaMA/comments/191bufg/currently_working_on_building_a_worldbuildrp/ | false | false | self | 50 | {'enabled': False, 'images': [{'id': 'vBihn1wz8RSBZNXbn0zgmsx8oVKVjL3krCRZbMWEdeQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gaXcNSW79WamwnRdc6rFlvbSOU8gYg1u0YReYiej-0o.jpg?width=108&crop=smart&auto=webp&s=62e0fcf4304aee343b4a1ecb9d261177d5a08c0f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gaXcNSW79WamwnRdc6rFlvbSOU8gYg1u0YReYiej-0o.jpg?width=216&crop=smart&auto=webp&s=4a9810f402befef905425e0e84cf10afe5df1b8b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gaXcNSW79WamwnRdc6rFlvbSOU8gYg1u0YReYiej-0o.jpg?width=320&crop=smart&auto=webp&s=1df820961954b8c746e1c0839acdd827e051c8c1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gaXcNSW79WamwnRdc6rFlvbSOU8gYg1u0YReYiej-0o.jpg?width=640&crop=smart&auto=webp&s=116f959af8e3144a1acf2f5465ebbc945c82c2fc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gaXcNSW79WamwnRdc6rFlvbSOU8gYg1u0YReYiej-0o.jpg?width=960&crop=smart&auto=webp&s=cccbef4c483a22968c4cac502164c81d78a60f18', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gaXcNSW79WamwnRdc6rFlvbSOU8gYg1u0YReYiej-0o.jpg?width=1080&crop=smart&auto=webp&s=f83c5d491c9da2a318f3152e39500f60ea4380bc', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gaXcNSW79WamwnRdc6rFlvbSOU8gYg1u0YReYiej-0o.jpg?auto=webp&s=5aa6de2e91dffb99955191b2f8bb23d738322626', 'width': 1200}, 'variants': {}}]} |
The hidden cost of all the mostly data-driven algorithmic efficiencies during pretraining that we are seeing in smaller models, when open source releases their models or AI researchers publish their results, is a lack of larger models... | 1 | If I were a big company and wanted to train a SOTA model,
Rather than training a new larger model now to compete with or beat GPT 4,
I would just wait for new information to come out that would make my model better during pretraining.
The problem is that this information is accelerating: every week new information comes out that improves pretraining slightly.
This may help smaller models, whose pretraining costs are relatively cheap (but still expensive), but the risk of pretraining a large model and missing out on these efficiency gains has paralyzed companies from pretraining large models that can compete with GPT-4.
If you spend hundreds of millions of dollars training your model and then your competition does the same 2-3 months later and gets a boost, you feel your money is wasted.
Thus we have been stuck with GPT-4 as SOTA since March of 2023.
Ironically, the real acceleration both in open source and closed source will happen once a new SOTA is released.
Even here, I have seen some people claiming that scaling is dead, but scaling is one of the best ways we know of to push SOTA further.
A 10 trillion parameter model will always be better than a trillion parameter model trained the same way.
Hopefully computing power increase due to new Nvidia chips unfreezes the SOTA. | 2024-01-08T03:31:55 | https://www.reddit.com/r/LocalLLaMA/comments/191bdqm/the_hidden_cost_of_all_mostly_data_driven/ | TysonUsykFury | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 191bdqm | false | null | t3_191bdqm | /r/LocalLLaMA/comments/191bdqm/the_hidden_cost_of_all_mostly_data_driven/ | false | false | self | 1 | null |
alts – 100% free, local & offline voice assistant with speech recognition | 1 | 2024-01-08T03:03:17 | https://github.com/alxpez/alts | alxpez | github.com | 1970-01-01T00:00:00 | 0 | {} | 191at7i | false | null | t3_191at7i | /r/LocalLLaMA/comments/191at7i/alts_100_free_local_offline_voice_assistant_with/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'gfcNuCwl_A0rp-wXcPD6L-AUPD835kmMKr_iZQ9YOZQ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/iYtaWN_tUsnKrzbXA3HOVvEzotGD-qBdIGUgmrlULrI.jpg?width=108&crop=smart&auto=webp&s=53e6f524c83dca67e47c81a7101d93162c303bf8', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/iYtaWN_tUsnKrzbXA3HOVvEzotGD-qBdIGUgmrlULrI.jpg?width=216&crop=smart&auto=webp&s=af90bf5da2d02269ad4d6534e216ead18efce58b', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/iYtaWN_tUsnKrzbXA3HOVvEzotGD-qBdIGUgmrlULrI.jpg?width=320&crop=smart&auto=webp&s=30989c06082b4cf8f2fcee5f5c896de72fc7ca95', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/iYtaWN_tUsnKrzbXA3HOVvEzotGD-qBdIGUgmrlULrI.jpg?width=640&crop=smart&auto=webp&s=2c37f3b8ad1bf28832d7e24e395ac7225261f351', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/iYtaWN_tUsnKrzbXA3HOVvEzotGD-qBdIGUgmrlULrI.jpg?width=960&crop=smart&auto=webp&s=17a98cec4f5b0f937909591f36c3a7bf250a19bd', 'width': 960}], 'source': {'height': 1010, 'url': 'https://external-preview.redd.it/iYtaWN_tUsnKrzbXA3HOVvEzotGD-qBdIGUgmrlULrI.jpg?auto=webp&s=10d8824672e9dacf18f17da3100949caad039c29', 'width': 1010}, 'variants': {}}]} | ||
Is the "max RAM required" gpu ram or system ram? Can I just run on system ram? | 1 | [removed] | 2024-01-08T03:02:19 | https://www.reddit.com/r/LocalLLaMA/comments/191asid/is_the_max_ram_required_gpu_ram_or_system_ram_can/ | Original_Job6327 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 191asid | false | null | t3_191asid | /r/LocalLLaMA/comments/191asid/is_the_max_ram_required_gpu_ram_or_system_ram_can/ | false | false | 1 | null | |
P40 or RTX 3060? | 8 | Hello everyone, I am building my first small LLM workstation. I will probably need it for a project I am currently playing around with, and if I get lucky I will get to write my doctorate about it and have access to a lot more horsepower. But that might take a while.
So until then I want to have my own system that can run some basic inference. Currently I am torn between the P40 for the 24GB VRAM (and yes, I do have Above 4G Decoding and ReBAR support) and the 3060 because it is just easy to use.
CPU will be a Ryzen 3600 for now,
with 32GB of 3200MHz memory,
and currently a 550W power supply (I can change it, though).
I want to extract relationships between entities in text, specifically genes in biological papers.
I played around with Mistral 7B and it is doing fine on that task, but it could be better.
The SOLAR models seem very interesting, and the new 11Bx2 MoE that just got tested also looks good. So now the question becomes: which one do I buy, and what models can I run on the P40 that I can't on the RTX 3060, besides that 11Bx2 model?
How big is the performance delta between them?
Can I sensibly run two 7B models next to each other on the P40?
I would love to get some opinions :) | 2024-01-08T02:19:06 | https://www.reddit.com/r/LocalLLaMA/comments/1919wc9/p40_or_rtx_3060/ | Noxusequal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1919wc9 | false | null | t3_1919wc9 | /r/LocalLLaMA/comments/1919wc9/p40_or_rtx_3060/ | false | false | self | 8 | null |
Innovative Approach to Enhance LLMs: Specialized 1B Model Integration into a 70B Model | 20 | Given the significant computational demands and complexities involved in training immense models (like those requiring A100/H100 GPUs), I started thinking about a more resource-efficient strategy. My idea revolves around initially developing a specialized 1B-parameter model in a narrowly defined domain so that my RTX 3090 can do the work. The goal is to ensure that this smaller model achieves exceptional expertise and understanding within its specific field.
Once this 1B model demonstrates robust performance in its domain, the next step would be to integrate it into a larger, 70B-parameter model. This model fusion technique aims to augment the larger model's capabilities, particularly in the domain where the 1B model excels.
As more 1b models are integrated into the big model, the big model will become more and more capable. | 2024-01-08T02:10:15 | https://www.reddit.com/r/LocalLLaMA/comments/1919pg4/innovative_approach_to_enhance_llms_specialized/ | Own_Relationship8953 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1919pg4 | false | null | t3_1919pg4 | /r/LocalLLaMA/comments/1919pg4/innovative_approach_to_enhance_llms_specialized/ | false | false | self | 20 | null |
AI to Analyze Logbook Realtime (not GenAI) | 1 | Hi, I know this may not be the right forum as the topic is not directly related to Generative AI. Any guidance appreciated!
I am investing some effort to plug in AI/LLM with Home Assistant (smart home self hosted open source solution). My initial goal is to make available the Logbook and other data generated by HA to a model that is able to analyze it and get insights, extract information, ... I would like to set up a model that continuously does that as the events keep coming (not a one-shot analysis). Anyone have some insights to share? | 2024-01-08T01:44:08 | https://www.reddit.com/r/LocalLLaMA/comments/19195c0/ai_to_analyze_logbook_realtime_not_genai/ | cfurini | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19195c0 | false | null | t3_19195c0 | /r/LocalLLaMA/comments/19195c0/ai_to_analyze_logbook_realtime_not_genai/ | false | false | self | 1 | null |
I Ran Advanced LLMs on the Raspberry Pi 5! | 1 | 2024-01-08T01:42:43 | https://rb.gy/mc6dlp | tim_macgyver | rb.gy | 1970-01-01T00:00:00 | 0 | {} | 191948o | false | null | t3_191948o | /r/LocalLLaMA/comments/191948o/i_ran_advanced_llms_on_the_raspberry_pi_5/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'iAGKKvfraWCYFq8sA-9BxpVmV-jbiItHck_-cRL6N-Q', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/o0NtlsBlOLnV2A3grHb9O_UhPG3lh7mwQvAzhVP-jXc.jpg?width=108&crop=smart&auto=webp&s=7a11a22f756a5cdc28de62f5d43393f1de208f3d', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/o0NtlsBlOLnV2A3grHb9O_UhPG3lh7mwQvAzhVP-jXc.jpg?width=216&crop=smart&auto=webp&s=fe44551ea4b408c0cdb9e6362b3f325280f2709c', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/o0NtlsBlOLnV2A3grHb9O_UhPG3lh7mwQvAzhVP-jXc.jpg?width=320&crop=smart&auto=webp&s=affc3955bdd58a03e18a80a85e2c52b5ebe020cc', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/o0NtlsBlOLnV2A3grHb9O_UhPG3lh7mwQvAzhVP-jXc.jpg?width=640&crop=smart&auto=webp&s=6635928341ebe6712fbde60ba379a13bc64accbd', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/o0NtlsBlOLnV2A3grHb9O_UhPG3lh7mwQvAzhVP-jXc.jpg?width=960&crop=smart&auto=webp&s=c772eb29844782aa44c79e30ea6ba2353699efb1', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/o0NtlsBlOLnV2A3grHb9O_UhPG3lh7mwQvAzhVP-jXc.jpg?width=1080&crop=smart&auto=webp&s=e3efac022d18ea7da4d547a94e6fa4c28a4e00e9', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/o0NtlsBlOLnV2A3grHb9O_UhPG3lh7mwQvAzhVP-jXc.jpg?auto=webp&s=cc55c01274028eed3616a1175a79820970030c65', 'width': 1280}, 'variants': {}}]} | ||
NVIDIA RTX 5880 48GB GDDR6 Ada Generation Graphics Card | 1 | It's coming.
The model parameters are getting smaller and the devices are getting bigger.
https://preview.redd.it/tm9bd1vs74bc1.png?width=868&format=png&auto=webp&s=efae675166b1f2e792a3974a4b07c76bbcc3c04c
[https://www.nvidia.com/en-us/design-visualization/rtx-5880/](https://www.nvidia.com/en-us/design-visualization/rtx-5880/) | 2024-01-08T01:04:33 | https://www.reddit.com/r/LocalLLaMA/comments/1918acu/nvidia_rtx_5880_48gb_gddr6_ada_generation/ | sapporonight | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1918acu | false | null | t3_1918acu | /r/LocalLLaMA/comments/1918acu/nvidia_rtx_5880_48gb_gddr6_ada_generation/ | false | false | 1 | null | |
LLM that text messages like a teenager? | 1 | Has anyone tried any LLMs that can emulate text message conversations between teenagers? I’m specifically looking for shorter text output with little emphasis on grammar and punctuations. All models I’ve tried output larger text output. I’ve tried mentioning in prompt to never exceed ‘x’ number of words but it doesn’t work 100% of the time. | 2024-01-08T00:27:54 | https://www.reddit.com/r/LocalLLaMA/comments/1917gfm/llm_that_text_messages_like_a_teenager/ | RepresentativeOdd276 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1917gfm | false | null | t3_1917gfm | /r/LocalLLaMA/comments/1917gfm/llm_that_text_messages_like_a_teenager/ | false | false | self | 1 | null |
Run an own model as personal assistant? | 5 | Hello everyone,
I love LLMs and what they can do, but I don't want to give a shady black box in the cloud all my private thoughts and data.
Is it possible to not only run a pretrained model locally but to actually constantly fine-tune / enhance this model?
I want a model that actually knows all my data and remembers all previous conversations. It should learn from our conversations and, e.g., never repeat a mistake that I corrected. It should also, for example, remember all the people I ever talked about.
Like a diary, but with an LLM built around it to get actually valuable data from this diary.
Is that possible? If yes, what kind of hardware would i need? | 2024-01-08T00:23:12 | https://www.reddit.com/r/LocalLLaMA/comments/1917co5/run_an_own_model_as_personal_assistant/ | Some-Thoughts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1917co5 | false | null | t3_1917co5 | /r/LocalLLaMA/comments/1917co5/run_an_own_model_as_personal_assistant/ | false | false | self | 5 | null |
Chat templates after model merging using SLERP | 1 | Hi guys, I am new to model merging and am trying to merge some models to see how the newer ones turn out. I am planning to merge two mistral 7b finetunes with different chat templates (like OpenHermes-2.5 and Mistral-Instruct). What should I use as the chat template for the final model? Will either of the chat templates work? Also what are the best parameters to use when model merging. Thanks! | 2024-01-08T00:00:18 | https://www.reddit.com/r/LocalLLaMA/comments/1916thr/chat_templates_after_model_merging_using_slerp/ | WillingMood2319 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1916thr | false | null | t3_1916thr | /r/LocalLLaMA/comments/1916thr/chat_templates_after_model_merging_using_slerp/ | false | false | self | 1 | null |
Would you like to build a multilingual model? We present TaCo 🌮 🌮 (Translation-Assisted Chain-of-Thought Processes) method along with Alpaca-52K, Dolly-15K, and the Vicuña Benchmark datasets, available in 132 languages | 1 | [removed] | 2024-01-07T23:55:03 | https://www.reddit.com/r/LocalLLaMA/comments/1916ozv/would_you_like_to_build_a_multilingual_model_we/ | bibek_LLMs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1916ozv | false | null | t3_1916ozv | /r/LocalLLaMA/comments/1916ozv/would_you_like_to_build_a_multilingual_model_we/ | false | false | 1 | null | |
🐺🐦⬛ LLM Comparison/Test: Confirm Leaderboard? Big News! (SOLAR+Bagel+Mixtral/Yi) | 227 | The Hugging Face Leaderboard has been taken over first by SOLAR, then by Bagel, and now by some Yi-based (but incorrectly Mixtral-named) models - and I'm doing my best to keep up with all that and provide additional evaluations as usual!
Will my tests confirm or refute their rankings? Spoiler: There's some big news ahead!
So without further ado, here are the tests and comparisons, and my updated ranking table (now with links to the posts where I tested the models, if it's not in this one):
## Models tested:
- **~~Mixtral~~ Yi MoE:**
- Mixtral_34Bx2_MoE_60B
- Mixtral_11Bx2_MoE_19B
- **Bagel:**
- bagel-34b-v0.2
- bagel-dpo-34b-v0.2
- nontoxic-bagel-34b-v0.2
- bagel-8x7b-v0.2
- **SOLAR:**
- Nous-Hermes-2-SOLAR-10.7B
- Sakura-SOLAR-Instruct
- SauerkrautLM-SOLAR-Instruct
- SauerkrautLM-UNA-SOLAR-Instruct
- SOLAR-10.7B-Instruct-v1.0
- SOLARC-M-10.7B
- SOLARC-MOE-10.7Bx4
- SOLARC-MOE-10.7Bx6
- UNA-SOLAR-10.7B-Instruct-v1.0
## Testing methodology
- **4 German data protection trainings:**
- I run models through **4** professional German online data protection trainings/exams - the same that our employees have to pass as well.
- The test data and questions as well as all instructions are in German while the character card is in English. This **tests translation capabilities and cross-language understanding**.
- Before giving the information, I instruct the model (in German): *I'll give you some information. Take note of this, but only answer with "OK" as confirmation of your acknowledgment, nothing else.* This **tests instruction understanding and following capabilities**.
- After giving all the information about a topic, I give the model the exam question. It's a multiple choice (A/B/C) question, where the last one is the same as the first but with changed order and letters (X/Y/Z). Each test has 4-6 exam questions, for a total of **18** multiple choice questions.
- If the model gives a single letter response, I ask it to answer with more than just a single letter - and vice versa. If it fails to do so, I note that, but it doesn't affect its score as long as the initial answer is correct.
- I rank models according to how many correct answers they give, primarily after being given the curriculum information beforehand, and secondarily (as a tie-breaker) after answering blind without being given the information beforehand.
- All tests are separate units, context is cleared in between, there's no memory/state kept between sessions.
- [SillyTavern](https://github.com/SillyTavern/SillyTavern) frontend
- [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui) backend
- **Deterministic** generation settings preset (to eliminate as many random factors as possible and allow for meaningful model comparisons)
- Official prompt format as noted
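In code terms, the ranking criterion described in the list above boils down to something like this (an illustrative sketch with made-up example scores, not my actual evaluation harness):

```python
# Each model gets two scores out of 18: answering with the information given, and blind.
results = {
    "Model A": {"informed": 18, "blind": 17},
    "Model B": {"informed": 18, "blind": 14},
    "Model C": {"informed": 17, "blind": 15},
}

# Primary key: informed score; tie-breaker: blind score. Higher is better.
ranking = sorted(results.items(), key=lambda kv: (kv[1]["informed"], kv[1]["blind"]), reverse=True)

for rank, (model, scores) in enumerate(ranking, start=1):
    print(f"{rank}. {model}: {scores['informed']}/18 (blind: {scores['blind']}/18)")
```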
## Detailed Test Reports
And here are the detailed notes, the basis of my ranking, and also additional comments and observations:
### ~~Mixtral~~ Yi MoE
- **[Mixtral_34Bx2_MoE_60B](https://huggingface.co/cloudyu/Mixtral_34Bx2_MoE_60B)** 4-bit+DoubleQuant+FlashAttention2, ~~200K~~ 4K context, Alpaca format:
- ✅ Gave correct answers to all **18/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **4+3+4+6=17/18**
- ✅ Consistently acknowledged all data input with "OK".
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter consistently.
**YEAH!!** Finally a really good - great, even - top model again! Not perfect, but damn close. And that at just *double-quantized* 4-bit!
In fact, it even beat Mistral AI's own Mixtral-8x7B-Instruct-v0.1 - the only MoE model that was doing really well so far! So this is actually huge for the local LLM community, not just this one model in particular, but the method used to create the first community MoE that really rocks!
And if you're looking for a new model to try (and have the resources), this is the one! Just remember it's not a Mixtral variant despite its name, it's actually Yi-based, so it's best for English and Chinese language output (its writing in German and probably other languages isn't that good, which means for me personally, I'll probably keep using Mixtral mainly - for now).
But no matter if this model is your new main or not - what's most important about it is that it demonstrates that the community (and not just Mistral AI) can create properly working MoE models! No other community-created MoE did that well in my tests thus far. So hopefully the whole community can learn from this and we'll soon see more great MoE models, elevating our local LLM capabilities even further!
- **[Mixtral_11Bx2_MoE_19B](https://huggingface.co/cloudyu/Mixtral_11Bx2_MoE_19B)** ~~200K~~ 4K context, Alpaca format:
- ✅ Gave correct answers to all **18/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **4+4+3+2=13/18**
- ❌ Did NOT follow instructions to acknowledge data input with "OK".
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
Another community MoE that works! It wasn't as good as the 2x34B one, but hey, it's only 2x11B anyway, so that's to be expected. If you can't run the other, try this one!
### Bagel
- **[bagel-34b-v0.2](https://huggingface.co/jondurbin/bagel-34b-v0.2)** 4-bit, ~~200K~~ 4K context, Alpaca format:
- ✅ Gave correct answers to all **18/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **4+2+4+6=16/18**
- ✅ Consistently acknowledged all data input with "OK".
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
Best Bagel in my tests. Only Bagel not to completely flub the third blind test, but made two mistakes in another test that the other non-MoE Bagels got right.
And look how well it did, even beat Mixtral-8x7B-Instruct-v0.1 (if just slightly) and flew ahead of many excellent 70B models and GPT-3.5.
- **[bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2)** 4-bit, ~~200K~~ 4K context, Alpaca format:
- ✅ Gave correct answers to all **18/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **4+4+0+6=14/18**
- ✅ Consistently acknowledged all data input with "OK".
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter consistently.
Tied for second best Bagel in my tests with the "nontoxic" version. Flubbed one of the four blind tests completely, ignoring some of the questions while answering the others wrongly.
This is actually one of the two models that Mixtral_34Bx2_MoE_60B was created out of.
- **[nontoxic-bagel-34b-v0.2](https://huggingface.co/jondurbin/nontoxic-bagel-34b-v0.2)** 4-bit, ~~200K~~ 4K context, Alpaca format:
- ✅ Gave correct answers to all **18/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **4+4+0+6=14/18**
- ✅ Consistently acknowledged all data input with "OK".
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter consistently.
Tied for second best Bagel in my tests with the DPO version. Flubbed one of the four blind tests completely as well, ignoring some of the questions while answering the others wrongly.
- **[bagel-8x7b-v0.2](https://huggingface.co/jondurbin/bagel-8x7b-v0.2)** ~~200K~~ 4K context, Alpaca format:
- ❌ Gave correct answers to only **4+2+0+0=6/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **4+2+0+4=10/18**
- ✅ Consistently acknowledged all data input with "OK".
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
- ❌ In two of the four tests, would only say "OK" to the questions instead of giving the answer
Damn, what happened here? While this model acknowledged all data input with OK, in half the normal tests it wouldn't even answer the questions, just acknowledge them as well. Only when thanked at the end of the tests would it respond normally again. And in the blind tests, it also exhibited severe logical problems, so all in all it simply didn't deliver.
And that despite - or more likely, because of - being a MoE model. I'd expect it to perform better, not worse, than the models it's made up of. So as that's clearly not the case here, it looks like the MoE merging didn't work out here, like with so many community-made MoE models.
But since Mixtral_34Bx2_MoE_60B and Mixtral_11Bx2_MoE_19B have shown that it's possible for others besides Mistral AI to make capable MoEs, and the non-MoE versions of Bagel prove that the base model is fine, there's hope for a fixed and improved Bagel MoE further down the line. (Ironically, Mixtral_34Bx2_MoE_60B uses Bagel as one of its two base models - so basically that's a Bagel MoE, too!)
### SOLAR
- **[SauerkrautLM-UNA-SOLAR-Instruct](https://huggingface.co/Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct)** 4K context, User-Assistant-Newlines format:
- ❌ Gave correct answers to only **4+3+4+6=17/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **3+4+3+5=15/18**
- ❌ Did NOT follow instructions to acknowledge data input with "OK".
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
This is, together with UNA-SOLAR-10.7B-Instruct-v1.0, the best SOLAR variant I tested.
And, wow, a mere 11B model ahead of GPT-3.5 and Mistral AI's API models! Look how far we have come already. And if the higher ranked models are too resource-hungry for your system, try this one or one of its variants.
Only downside is 4K max native context. So you could scale it up, but that would probably reduce quality. Still, 4K is all we had for a while now, so at least you now get more quality out of it until the next big leap happens (which will probably be soon, considering the pace at which local AI advances).
- **[UNA-SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/fblgit/UNA-SOLAR-10.7B-Instruct-v1.0)** 4K context, User-Assistant-Newlines format:
- ❌ Gave correct answers to only **4+3+4+6=17/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **3+4+3+5=15/18**
- ❌ Did NOT follow instructions to acknowledge data input with "OK".
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
This is, together with SauerkrautLM-UNA-SOLAR-Instruct, the best SOLAR variant I tested.
- **[SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0)** 4K context, User-Assistant-Newlines format:
- ❌ Gave correct answers to only **4+3+4+6=17/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **3+4+3+4=14/18**
- ❌ Did NOT follow instructions to acknowledge data input with "OK".
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
The original SOLAR 10.7B Instruct. Did better than all the merges based on it, except for the two UNA variants above.
- **[SOLARC-M-10.7B](https://huggingface.co/DopeorNope/SOLARC-M-10.7B)** 4K context, User-Assistant-Newlines format:
- ❌ Gave correct answers to only **4+4+4+5=17/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **3+4+1+2=10/18**
- ❌ Did NOT follow instructions to acknowledge data input with "OK".
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
- ➖ Responded in Dutch to some questions.
At the time of testing, this is the highest ranked SOLAR model on the HF leaderboard. In my normal tests, it did as well as the other best SOLARs, but in the blind runs, it was the worst. Interestingly, it got a perfect score in one of the tests where all the other SOLARs failed, but then got one question wrong that almost all the other SOLARs answered correctly.
- **[SauerkrautLM-SOLAR-Instruct](https://huggingface.co/VAGOsolutions/SauerkrautLM-SOLAR-Instruct)** 4K context, User-Assistant-Newlines format:
- ❌ Gave correct answers to only **4+3+4+5=16/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **3+4+3+3=13/18**
- ❌ Did NOT follow instructions to acknowledge data input with "OK".
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
This one falls a little off compared to the SOLARs listed above. Its UNA variant, on the other hand, is one of the two best SOLAR variants.
- **[Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B)** 4K context, ChatML format:
- ❌ Gave correct answers to only **4+3+3+6=16/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **3+3+3+3=12/18**
- ❌ Did NOT follow instructions to acknowledge data input with "OK".
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
When I see Nous or Hermes in a model's name, I always expect high quality. This wasn't bad, but not better than the other SOLAR variants, so it didn't stand out as much as Nous Hermes usually does.
- **[Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct)** 4K context, Orca-Hashes format:
- ❌ Gave correct answers to only **4+3+3+6=16/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **3+3+3+3=12/18**
- ❌ Did NOT follow instructions to acknowledge data input with "OK".
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
The one SOLAR variant with a different prompt format. Not a bad model by itself, just as good as Nous Hermes 2 SOLAR, but other SOLAR variants (except the MoE version) are better.
- **[SOLARC-MOE-10.7Bx4](https://huggingface.co/DopeorNope/SOLARC-MOE-10.7Bx4)** 4-bit, 4K context, User-Assistant-Newlines format:
- ❌ Gave correct answers to only **4+2+4+6=16/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **3+3+0+6=12/18**
- ❌ Did NOT follow instructions to acknowledge data input with "OK".
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
Ran much slower than expected: Unquantized, I only got 0.5 tokens per second on 2x 3090 (>90% load on one GPU and none on the other, with plenty of VRAM to spare, no shared system memory, ooba's up-to-date Transformers loader). And even at 4-bit quantization, I just got about 5 tokens per second. Just an issue on my end or a general problem of this model? Other than speed, the results weren't that great, so this looks like another failed attempt at producing a viable MoE model.
- **[SOLARC-MOE-10.7Bx6](https://huggingface.co/DopeorNope/SOLARC-MOE-10.7Bx6)** 4-bit, 4K context, User-Assistant-Newlines format:
- ❌ Gave correct answers to only **3+2+3+5=13/18** multiple choice questions! Just the questions, no previous information, gave correct answers: **4+4+2+4=14/18**
- ❌ Did NOT follow instructions to acknowledge data input with "OK".
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter.
Same as the other SOLAR MoE, too slow to be usable, so I've tested it at 4-bit. Results were worse than the other MoE and all the SOLARs, and the model getting a better score in the blind tests than the normal ones indicates something's wrong, as that means the information given to help answer the questions was confusing the model. In fact, I noticed a lot of confusion with this particular model, like stating the right answer but choosing the wrong letter. Another clear indicator that we're still far from mastering MoE merging.
## Updated Rankings
This is my objective ranking of these models based on measuring factually correct answers, instruction understanding and following, and multilingual abilities:
| Rank | Model | Size | Format | Quant | Context | Prompt | 1st Score | 2nd Score | OK | +/- |
| ---- | --------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | ------ | ------- | ----------- | ------------------------ | --------- | --------- | --- | --- |
| 1 | [GPT-4](https://www.reddit.com/r/LocalLLaMA/comments/18yp9u4/llm_comparisontest_api_edition_gpt4_vs_gemini_vs/) | GPT-4 | API | | | | 18/18 ✓ | 18/18 ✓ | ✓ | ✓ |
| 1 | [goliath-120b-GGUF](https://www.reddit.com/r/LocalLLaMA/comments/185ff51/big_llm_comparisontest_3x_120b_12x_70b_2x_34b/) | 120B | GGUF | Q2_K | 4K | Vicuna 1.1 | 18/18 ✓ | 18/18 ✓ | ✓ | ✓ |
| 1 | [Tess-XL-v1.0-GGUF](https://www.reddit.com/r/LocalLLaMA/comments/185ff51/big_llm_comparisontest_3x_120b_12x_70b_2x_34b/) | 120B | GGUF | Q2_K | 4K | Synthia | 18/18 ✓ | 18/18 ✓ | ✓ | ✓ |
| 1 | [Nous-Capybara-34B-GGUF](https://www.reddit.com/r/LocalLLaMA/comments/185ff51/big_llm_comparisontest_3x_120b_12x_70b_2x_34b/) | 34B | GGUF | Q4_0 | 16K | Vicuna 1.1 | 18/18 ✓ | 18/18 ✓ | ✓ | ✓ |
| 2 | [Venus-120b-v1.0](https://www.reddit.com/r/LocalLLaMA/comments/185ff51/big_llm_comparisontest_3x_120b_12x_70b_2x_34b/) | 120B | EXL2 | 3.0bpw | 4K | Alpaca | 18/18 ✓ | 18/18 ✓ | ✓ | ✗ |
| 3 | [lzlv_70B-GGUF](https://www.reddit.com/r/LocalLLaMA/comments/185ff51/big_llm_comparisontest_3x_120b_12x_70b_2x_34b/) | 70B | GGUF | Q4_0 | 4K | Vicuna 1.1 | 18/18 ✓ | 17/18 | ✓ | ✓ |
| 4 🆕 | [Mixtral_34Bx2_MoE_60B](https://huggingface.co/cloudyu/Mixtral_34Bx2_MoE_60B) | 2x34B | HF | 4-bit | ~~200K~~ 4K | Alpaca | 18/18 ✓ | 17/18 | ✓ | ✗ |
| 5 | [GPT-4 Turbo](https://www.reddit.com/r/LocalLLaMA/comments/18yp9u4/llm_comparisontest_api_edition_gpt4_vs_gemini_vs/) | GPT-4 | API | | | | 18/18 ✓ | 16/18 | ✓ | ✓ |
| 5 | [chronos007-70B-GGUF](https://www.reddit.com/r/LocalLLaMA/comments/185ff51/big_llm_comparisontest_3x_120b_12x_70b_2x_34b/) | 70B | GGUF | Q4_0 | 4K | Alpaca | 18/18 ✓ | 16/18 | ✓ | ✓ |
| 5 | [SynthIA-70B-v1.5-GGUF](https://www.reddit.com/r/LocalLLaMA/comments/185ff51/big_llm_comparisontest_3x_120b_12x_70b_2x_34b/) | 70B | GGUF | Q4_0 | 4K | SynthIA | 18/18 ✓ | 16/18 | ✓ | ✓ |
| 6 🆕 | [bagel-34b-v0.2](https://huggingface.co/jondurbin/bagel-34b-v0.2) | 34B | HF | 4-bit | ~~200K~~ 4K | Alpaca | 18/18 ✓ | 16/18 | ✓ | ✗ |
| 7 | [Mixtral-8x7B-Instruct-v0.1](https://www.reddit.com/r/LocalLLaMA/comments/18gz54r/llm_comparisontest_mixtral8x7b_mistral_decilm/) | 8x7B | HF | 4-bit | ~~32K~~ 4K | Mixtral | 18/18 ✓ | 16/18 | ✗ | ✓ |
| 8 | [dolphin-2_2-yi-34b-GGUF](https://www.reddit.com/r/LocalLLaMA/comments/185ff51/big_llm_comparisontest_3x_120b_12x_70b_2x_34b/) | 34B | GGUF | Q4_0 | 16K | ChatML | 18/18 ✓ | 15/18 | ✗ | ✗ |
| 9 | [StellarBright-GGUF](https://www.reddit.com/r/LocalLLaMA/comments/185ff51/big_llm_comparisontest_3x_120b_12x_70b_2x_34b/) | 70B | GGUF | Q4_0 | 4K | Vicuna 1.1 | 18/18 ✓ | 14/18 | ✓ | ✓ |
| 10 | [Dawn-v2-70B-GGUF](https://www.reddit.com/r/LocalLLaMA/comments/185ff51/big_llm_comparisontest_3x_120b_12x_70b_2x_34b/) | 70B | GGUF | Q4_0 | 4K | Alpaca | 18/18 ✓ | 14/18 | ✓ | ✗ |
| 10 | [Euryale-1.3-L2-70B-GGUF](https://www.reddit.com/r/LocalLLaMA/comments/185ff51/big_llm_comparisontest_3x_120b_12x_70b_2x_34b/) | 70B | GGUF | Q4_0 | 4K | Alpaca | 18/18 ✓ | 14/18 | ✓ | ✗ |
| 10 🆕 | [bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) | 34B | HF | 4-bit | ~~200K~~ 4K | Alpaca | 18/18 ✓ | 14/18 | ✓ | ✗ |
| 10 🆕 | [nontoxic-bagel-34b-v0.2](https://huggingface.co/jondurbin/nontoxic-bagel-34b-v0.2) | 34B | HF | 4-bit | ~~200K~~ 4K | Alpaca | 18/18 ✓ | 14/18 | ✓ | ✗ |
| 11 | [sophosynthesis-70b-v1](https://www.reddit.com/r/LocalLLaMA/comments/185ff51/big_llm_comparisontest_3x_120b_12x_70b_2x_34b/) | 70B | EXL2 | 4.85bpw | 4K | Vicuna 1.1 | 18/18 ✓ | 13/18 | ✓ | ✓ |
| 12 🆕 | [Mixtral_11Bx2_MoE_19B](https://huggingface.co/cloudyu/Mixtral_11Bx2_MoE_19B) | 2x11B | HF | — | ~~200K~~ 4K | Alpaca | 18/18 ✓ | 13/18 | ✗ | ✗ |
| 13 | [GodziLLa2-70B-GGUF](https://www.reddit.com/r/LocalLLaMA/comments/185ff51/big_llm_comparisontest_3x_120b_12x_70b_2x_34b/) | 70B | GGUF | Q4_0 | 4K | Alpaca | 18/18 ✓ | 12/18 | ✓ | ✓ |
| 14 | [Samantha-1.11-70B-GGUF](https://www.reddit.com/r/LocalLLaMA/comments/185ff51/big_llm_comparisontest_3x_120b_12x_70b_2x_34b/) | 70B | GGUF | Q4_0 | 4K | Vicuna 1.1 | 18/18 ✓ | 10/18 | ✗ | ✗ |
| 15 | [Airoboros-L2-70B-3.1.2-GGUF](https://www.reddit.com/r/LocalLLaMA/comments/185ff51/big_llm_comparisontest_3x_120b_12x_70b_2x_34b/) | 70B | GGUF | Q4_K_M | 4K | Llama 2 Chat | 17/18 | 16/18 | ✓ | ✗ |
| 16 | [Gemini Pro](https://www.reddit.com/r/LocalLLaMA/comments/18yp9u4/llm_comparisontest_api_edition_gpt4_vs_gemini_vs/) | Gemini | API | | | | 17/18 | 16/18 | ✗ | ✗ |
| 17 🆕 | [SauerkrautLM-UNA-SOLAR-Instruct](https://huggingface.co/Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct) | 11B | HF | — | 4K | User-Ass.-Newlines | 17/18 | 15/18 | ✗ | ✗ |
| 17 🆕 | [UNA-SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/fblgit/UNA-SOLAR-10.7B-Instruct-v1.0) | 11B | HF | — | 4K | User-Ass.-Newlines | 17/18 | 15/18 | ✗ | ✗ |
| 18 | [Rogue-Rose-103b-v0.2](https://www.reddit.com/r/LocalLLaMA/comments/18ft8f5/updated_llm_comparisontest_with_new_rp_model/) | 103B | EXL2 | 3.2bpw | 4K | Rogue Rose | 17/18 | 14/18 | ✗ | ✗ |
| 18 🆕 | [SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) | 11B | HF | — | 4K | User-Ass.-Newlines | 17/18 | 14/18 | ✗ | ✗ |
| 19 | [GPT-3.5 Turbo Instruct](https://www.reddit.com/r/LocalLLaMA/comments/185ff51/big_llm_comparisontest_3x_120b_12x_70b_2x_34b/) | GPT-3.5 | API | | | | 17/18 | 11/18 | ✗ | ✗ |
| 19 | [mistral-small](https://www.reddit.com/r/LocalLLaMA/comments/18yp9u4/llm_comparisontest_api_edition_gpt4_vs_gemini_vs/) | Mistral | API | | | | 17/18 | 11/18 | ✗ | ✗ |
| 20 🆕 | [SOLARC-M-10.7B](https://huggingface.co/DopeorNope/SOLARC-M-10.7B) | 11B | HF | — | 4K | User-Ass.-Newlines | 17/18 | 10/18 | ✗ | ✗ |
| 21 | [Synthia-MoE-v3-Mixtral-8x7B](https://www.reddit.com/r/LocalLLaMA/comments/18gz54r/llm_comparisontest_mixtral8x7b_mistral_decilm/) | 8x7B | HF | 4-bit | ~~32K~~ 4K | ~~Synthia~~ Llama 2 Chat | 17/18 | 9/18 | ✗ | ✗ |
| 22 | [dolphin-2.2-70B-GGUF](https://www.reddit.com/r/LocalLLaMA/comments/185ff51/big_llm_comparisontest_3x_120b_12x_70b_2x_34b/) | 70B | GGUF | Q4_0 | 4K | ChatML | 16/18 | 14/18 | ✗ | ✓ |
| 23 | [mistral-ft-optimized-1218](https://www.reddit.com/r/LocalLLaMA/comments/18u122l/llm_comparisontest_ranking_updated_with_10_new/) | 7B | HF | — | ~~32K~~ 8K | Alpaca | 16/18 | 13/18 | ✗ | ✓ |
| 24 🆕 | [SauerkrautLM-SOLAR-Instruct](https://huggingface.co/VAGOsolutions/SauerkrautLM-SOLAR-Instruct) | 11B | HF | — | 4K | User-Ass.-Newlines | 16/18 | 13/18 | ✗ | ✗ |
| 24 | [OpenHermes-2.5-Mistral-7B](https://www.reddit.com/r/LocalLLaMA/comments/18u122l/llm_comparisontest_ranking_updated_with_10_new/) | 7B | HF | — | ~~32K~~ 8K | ChatML | 16/18 | 13/18 | ✗ | ✗ |
| 25 🆕 | [SOLARC-MOE-10.7Bx4](https://huggingface.co/DopeorNope/SOLARC-MOE-10.7Bx4) | 4x11B | HF | 4-bit | 4K | User-Ass.-Newlines | 16/18 | 12/18 | ✗ | ✗ |
| 25 🆕 | [Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B) | 11B | HF | — | 4K | User-Ass.-Newlines | 16/18 | 12/18 | ✗ | ✗ |
| 25 🆕 | [Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) | 11B | HF | — | 4K | User-Ass.-Newlines | 16/18 | 12/18 | ✗ | ✗ |
| 25 | [Mistral-7B-Instruct-v0.2](https://www.reddit.com/r/LocalLLaMA/comments/18gz54r/llm_comparisontest_mixtral8x7b_mistral_decilm/) | 7B | HF | — | 32K | Mistral | 16/18 | 12/18 | ✗ | ✗ |
| 26 | [DeciLM-7B-instruct](https://www.reddit.com/r/LocalLLaMA/comments/18gz54r/llm_comparisontest_mixtral8x7b_mistral_decilm/) | 7B | HF | — | 32K | Mistral | 16/18 | 11/18 | ✗ | ✗ |
| 26 | [Marcoroni-7B-v3](https://www.reddit.com/r/LocalLLaMA/comments/18u122l/llm_comparisontest_ranking_updated_with_10_new/) | 7B | HF | — | ~~32K~~ 8K | Alpaca | 16/18 | 11/18 | ✗ | ✗ |
| 26 | [SauerkrautLM-7b-HerO](https://www.reddit.com/r/LocalLLaMA/comments/18u122l/llm_comparisontest_ranking_updated_with_10_new/) | 7B | HF | — | ~~32K~~ 8K | ChatML | 16/18 | 11/18 | ✗ | ✗ |
| 27 | [mistral-medium](https://www.reddit.com/r/LocalLLaMA/comments/18yp9u4/llm_comparisontest_api_edition_gpt4_vs_gemini_vs/) | Mistral | API | | | | 15/18 | 17/18 | ✗ | ✗ |
| 28 | [mistral-ft-optimized-1227](https://www.reddit.com/r/LocalLLaMA/comments/18u122l/llm_comparisontest_ranking_updated_with_10_new/) | 7B | HF | — | ~~32K~~ 8K | Alpaca | 15/18 | 14/18 | ✗ | ✓ |
| 29 | [GPT-3.5 Turbo](https://www.reddit.com/r/LocalLLaMA/comments/185ff51/big_llm_comparisontest_3x_120b_12x_70b_2x_34b/) | GPT-3.5 | API | | | | 15/18 | 14/18 | ✗ | ✗ |
| 30 | [dolphin-2.5-mixtral-8x7b](https://www.reddit.com/r/LocalLLaMA/comments/18gz54r/llm_comparisontest_mixtral8x7b_mistral_decilm/) | 8x7B | HF | 4-bit | ~~32K~~ 4K | ChatML | 15/18 | 13/18 | ✗ | ✓ |
| 31 | [Starling-LM-7B-alpha](https://www.reddit.com/r/LocalLLaMA/comments/18u122l/llm_comparisontest_ranking_updated_with_10_new/) | 7B | HF | — | 8K | OpenChat (GPT4 Correct) | 15/18 | 13/18 | ✗ | ✗ |
| 32 | [dolphin-2.6-mistral-7b-dpo](https://www.reddit.com/r/LocalLLaMA/comments/18w9hak/llm_comparisontest_brand_new_models_for_2024/) | 7B | HF | — | 16K | ChatML | 15/18 | 12/18 | ✗ | ✗ |
| 33 | [openchat-3.5-1210](https://www.reddit.com/r/LocalLLaMA/comments/18u122l/llm_comparisontest_ranking_updated_with_10_new/) | 7B | HF | — | 8K | OpenChat (GPT4 Correct) | 15/18 | 7/18 | ✗ | ✗ |
| 34 | [dolphin-2.7-mixtral-8x7b](https://www.reddit.com/r/LocalLLaMA/comments/18w9hak/llm_comparisontest_brand_new_models_for_2024/) | 8x7B | HF | 4-bit | 32K | ChatML | 15/18 | 6/18 | ✗ | ✗ |
| 35 | [dolphin-2.6-mixtral-8x7b](https://www.reddit.com/r/LocalLLaMA/comments/18u122l/llm_comparisontest_ranking_updated_with_10_new/) | 8x7B | HF | 4-bit | ~~32K~~ 16K | ChatML | 14/18 | 12/18 | ✗ | ✗ |
| 36 | [MixtralRPChat-ZLoss](https://www.reddit.com/r/LocalLLaMA/comments/18u122l/llm_comparisontest_ranking_updated_with_10_new/) | 8x7B | HF | 4-bit | ~~32K~~ 8K | CharGoddard | 14/18 | 10/18 | ✗ | ✗ |
| 37 🆕 | [SOLARC-MOE-10.7Bx6](https://huggingface.co/DopeorNope/SOLARC-MOE-10.7Bx6) | 6x11B | HF | 4-bit | 4K | User-Ass.-Newlines | 13/18 | 14/18 | ✗ | ✗ |
| 38 | [OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp](https://www.reddit.com/r/LocalLLaMA/comments/18u122l/llm_comparisontest_ranking_updated_with_10_new/) | 7B | HF | — | ~~32K~~ 8K | OpenChat (GPT4 Correct) | 13/18 | 13/18 | ✗ | ✗ |
| 39 | [dolphin-2.6-mistral-7b-dpo-laser](https://www.reddit.com/r/LocalLLaMA/comments/18w9hak/llm_comparisontest_brand_new_models_for_2024/) | 7B | HF | — | 16K | ChatML | 12/18 | 13/18 | ✗ | ✗ |
| 40 | [sonya-medium-x8-MoE](https://www.reddit.com/r/LocalLLaMA/comments/18w9hak/llm_comparisontest_brand_new_models_for_2024/) | 8x11B | HF | 4-bit | 8K | Alpaca | 12/18 | 10/18 | ✗ | ✗ |
| 41 | [dolphin-2.6-mistral-7b](https://www.reddit.com/r/LocalLLaMA/comments/18u122l/llm_comparisontest_ranking_updated_with_10_new/) | 7B | HF | — | ~~32K~~ 8K | ChatML | 10/18 | 10/18 | ✗ | ✗ |
| 42 | [SauerkrautLM-70B-v1-GGUF](https://www.reddit.com/r/LocalLLaMA/comments/185ff51/big_llm_comparisontest_3x_120b_12x_70b_2x_34b/) | 70B | GGUF | Q4_0 | 4K | Llama 2 Chat | 9/18 | 15/18 | ✗ | ✗ |
| 43 🆕 | [bagel-8x7b-v0.2](https://huggingface.co/jondurbin/bagel-8x7b-v0.2) | 8x7B | HF | — | ~~200K~~ 4K | Alpaca | 6/18 | 10/18 | ✓ | ✗ |
| 44 | [mistral-tiny](https://www.reddit.com/r/LocalLLaMA/comments/18yp9u4/llm_comparisontest_api_edition_gpt4_vs_gemini_vs/) | Mistral | API | | | | 4/18 | 11/18 | ✗ | ✗ |
| 45 | [dolphin-2_6-phi-2](https://www.reddit.com/r/LocalLLaMA/comments/18w9hak/llm_comparisontest_brand_new_models_for_2024/) | 2.7B | HF | — | 2K | ChatML | 0/18 ✗ | 0/18 ✗ | ✗ | ✗ |
| 45 | [TinyLlama-1.1B-Chat-v1.0](https://www.reddit.com/r/LocalLLaMA/comments/18w9hak/llm_comparisontest_brand_new_models_for_2024/) | 1.1B | HF | — | 2K | Zephyr | 0/18 ✗ | 0/18 ✗ | ✗ | ✗ |
- 1st Score = Correct answers to multiple choice questions (after being given curriculum information)
- 2nd Score = Correct answers to multiple choice questions (without being given curriculum information beforehand)
- OK = Followed instructions to acknowledge all data input with just "OK" consistently
- +/- = Followed instructions to answer with just a single letter or more than just a single letter
## Conclusions
**SOLAR** is just a mere 11B model, but did better than GPT-3.5 and Mistral AI's API models in my tests! Shows how far we have come already with local AI, and if you don't have the resources for anything even better, just use it and enjoy what you have!
**Bagel** did even better than that, as it's a 34B and Yi-based - even beat Mixtral-8x7B-Instruct-v0.1 (if just slightly) and flew ahead of many excellent 70B models. It's also the base for one of the following MoE models.
**Mixtral_34Bx2_MoE_60B** (which should be more aptly named Yi- or SUS-Bagel MoE) is the big winner of this round of tests. Finally a great top model again, one that even beat Mistral AI's own Mixtral-8x7B-Instruct-v0.1 - the only MoE model that was doing really well so far.
That's why this is so huge for the local LLM community, not just this one model in particular, but the method used to create the first community MoE that really rocks. So hopefully the whole community can learn from this and we'll soon see more great MoE models, elevating our local LLM capabilities even further!
--------------------------------------------------------------------------------
Here's a list of my previous model tests and comparisons or other related posts:
- [LLM Comparison/Test: API Edition (GPT-4 vs. Gemini vs. Mistral vs. local LLMs)](https://www.reddit.com/r/LocalLLaMA/comments/18yp9u4/llm_comparisontest_api_edition_gpt4_vs_gemini_vs/) Winner: GPT-4
- [LLM Comparison/Test: Brand new models for 2024 (Dolphin 2.6/2.7 Mistral/Mixtral/Phi-2, Sonya, TinyLlama)](https://www.reddit.com/r/LocalLLaMA/comments/18w9hak/llm_comparisontest_brand_new_models_for_2024/) Winner: dolphin-2.6-mistral-7b-dpo
- [LLM Comparison/Test: Ranking updated with 10 new models (the best 7Bs)!](https://www.reddit.com/r/LocalLLaMA/comments/18u122l/llm_comparisontest_ranking_updated_with_10_new/) Winners: mistral-ft-optimized-1218, OpenHermes-2.5-Mistral-7B
- [LLM **Prompt Format** Comparison/Test: Mixtral 8x7B Instruct with \*\*17\*\* different instruct templates](https://www.reddit.com/r/LocalLLaMA/comments/18ljvxb/llm_prompt_format_comparisontest_mixtral_8x7b/)
- [LLM Comparison/Test: Mixtral-8x7B, Mistral, DeciLM, Synthia-MoE](https://www.reddit.com/r/LocalLLaMA/comments/18gz54r/llm_comparisontest_mixtral8x7b_mistral_decilm/) Winner: Mixtral-8x7B-Instruct-v0.1
- [Updated LLM Comparison/Test with new RP model: Rogue Rose 103B](https://www.reddit.com/r/LocalLLaMA/comments/18ft8f5/updated_llm_comparisontest_with_new_rp_model/)
- [**Big** LLM Comparison/Test: 3x 120B, 12x 70B, 2x 34B, GPT-4/3.5](https://www.reddit.com/r/LocalLLaMA/comments/185ff51/big_llm_comparisontest_3x_120b_12x_70b_2x_34b/) Winner: Goliath 120B
- [LLM Format Comparison/Benchmark: 70B GGUF vs. EXL2 (and AWQ)](https://www.reddit.com/r/LocalLLaMA/comments/17w57eu/llm_format_comparisonbenchmark_70b_gguf_vs_exl2/)
- [LLM Comparison/Test: 2x 34B Yi (Dolphin, Nous Capybara) vs. 12x 70B, 120B, ChatGPT/GPT-4](https://www.reddit.com/r/LocalLLaMA/comments/17vcr9d/llm_comparisontest_2x_34b_yi_dolphin_nous/) Winners: goliath-120b-GGUF, Nous-Capybara-34B-GGUF
- [LLM Comparison/Test: Mistral 7B Updates (OpenHermes 2.5, OpenChat 3.5, Nous Capybara 1.9)](https://www.reddit.com/r/LocalLLaMA/comments/17p0gut/llm_comparisontest_mistral_7b_updates_openhermes/) Winners: OpenHermes-2.5-Mistral-7B, openchat_3.5, Nous-Capybara-7B-V1.9
- [More…](https://www.reddit.com/user/WolframRavenwolf/submitted/)
--------------------------------------------------------------------------------
[My Ko-fi page](https://ko-fi.com/wolframravenwolf) if you'd like to tip me to say thanks or request specific models to be tested with priority. Also consider tipping your favorite model creators, quantizers, or frontend/backend devs if you can afford to do so. They deserve it! | 2024-01-07T23:34:42 | https://www.reddit.com/r/LocalLLaMA/comments/1916896/llm_comparisontest_confirm_leaderboard_big_news/ | WolframRavenwolf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1916896 | false | null | t3_1916896 | /r/LocalLLaMA/comments/1916896/llm_comparisontest_confirm_leaderboard_big_news/ | false | false | self | 227 | {'enabled': False, 'images': [{'id': 'bDW7jyCB5L7RKBwRUqrzWSn3bIb_Szu_GogYRebiCjw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tj8OYAHtgushFKKBfbCCyLFmzL7PWZ-Wz8cbqYk-I9E.jpg?width=108&crop=smart&auto=webp&s=f076a50b0d594dc8ba3b2ee703d67664decf1cba', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tj8OYAHtgushFKKBfbCCyLFmzL7PWZ-Wz8cbqYk-I9E.jpg?width=216&crop=smart&auto=webp&s=dbc51e386e2d24255edce0cbd6a139d2b37dc0a1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tj8OYAHtgushFKKBfbCCyLFmzL7PWZ-Wz8cbqYk-I9E.jpg?width=320&crop=smart&auto=webp&s=13107e47f85ca5d663508f0d9c3bca3648a98f75', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tj8OYAHtgushFKKBfbCCyLFmzL7PWZ-Wz8cbqYk-I9E.jpg?width=640&crop=smart&auto=webp&s=f340c6c7589a711ca86aba7661baee1db6acf927', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tj8OYAHtgushFKKBfbCCyLFmzL7PWZ-Wz8cbqYk-I9E.jpg?width=960&crop=smart&auto=webp&s=76d5b3a13d8ba4378270e9ae41aa3081e25b37e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tj8OYAHtgushFKKBfbCCyLFmzL7PWZ-Wz8cbqYk-I9E.jpg?width=1080&crop=smart&auto=webp&s=d17a08361a95b03dd8a9a733ec765497cf2bf0d1', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/tj8OYAHtgushFKKBfbCCyLFmzL7PWZ-Wz8cbqYk-I9E.jpg?auto=webp&s=577b115ae7cd70077bd0dc15f7fe27e71ff19e2b', 'width': 1280}, 'variants': {}}]} |
Finishing out my vacation by releasing the website for my llama.cpp web tool Epistemology | 2 | 2024-01-07T22:58:17 | https://epistemology.dev/ | richardanaya | epistemology.dev | 1970-01-01T00:00:00 | 0 | {} | 1915cfp | false | null | t3_1915cfp | /r/LocalLLaMA/comments/1915cfp/finishing_out_my_vacation_by_releasing_the/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'i4xb5XRw775M6iSuK95lZvLwxeUu2rp9q58t16xb0Lo', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/ofnrnjdIaRRhGf_0DNFY_QsoT6LIml3wX0dZ-q_nliM.jpg?width=108&crop=smart&auto=webp&s=2a0af4c889c4ab63354c53966f5dbc713047808d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/ofnrnjdIaRRhGf_0DNFY_QsoT6LIml3wX0dZ-q_nliM.jpg?width=216&crop=smart&auto=webp&s=762288fdd81aef7ee64bbb9392c6932e60df5839', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/ofnrnjdIaRRhGf_0DNFY_QsoT6LIml3wX0dZ-q_nliM.jpg?width=320&crop=smart&auto=webp&s=61e1966b0fe5818d64628323d4162d0bf2ec66e2', 'width': 320}, {'height': 335, 'url': 'https://external-preview.redd.it/ofnrnjdIaRRhGf_0DNFY_QsoT6LIml3wX0dZ-q_nliM.jpg?width=640&crop=smart&auto=webp&s=15f42b12fcdb244cde7f767d4a14b8faa1ac8b01', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/ofnrnjdIaRRhGf_0DNFY_QsoT6LIml3wX0dZ-q_nliM.jpg?width=960&crop=smart&auto=webp&s=d4a362d6849c86f77b8c94d7622f1f016682cede', 'width': 960}], 'source': {'height': 536, 'url': 'https://external-preview.redd.it/ofnrnjdIaRRhGf_0DNFY_QsoT6LIml3wX0dZ-q_nliM.jpg?auto=webp&s=3d4beaf1fe4143b681df88a08570341afb5f44d1', 'width': 1024}, 'variants': {}}]} | ||
How to fine tune Llama CPP | 0 | Hi, I'm quite a beginner in the llama.cpp environment and I was wondering if there is any way of fine-tuning this CPU-compiled version of llama. I'm currently working with llama-2-13b-chat.ggmlv3.q5_1.bin from [https://huggingface.co/TheBloke](https://huggingface.co/TheBloke) and I don't know if these kinds of models can be fine-tuned. If they can, how can this be achieved?
​
Thanks! | 2024-01-07T22:55:28 | https://www.reddit.com/r/LocalLLaMA/comments/19159z2/how_to_fine_tune_llama_cpp/ | Responsible-Sky8889 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19159z2 | false | null | t3_19159z2 | /r/LocalLLaMA/comments/19159z2/how_to_fine_tune_llama_cpp/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'lkhHJv8mdJqfRh1UwBmzWCTs0H4Inw2Ugpa0eeRnBck', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2XWjLZRgmp4gyRdh7UWJJdFGyHYAlWYL9Q-rEfrT1Zs.jpg?width=108&crop=smart&auto=webp&s=de723b6ad3db101dc616591260b08a417f299523', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2XWjLZRgmp4gyRdh7UWJJdFGyHYAlWYL9Q-rEfrT1Zs.jpg?width=216&crop=smart&auto=webp&s=a348fea00b8418fdeedc2667262eb150c2f63ac3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2XWjLZRgmp4gyRdh7UWJJdFGyHYAlWYL9Q-rEfrT1Zs.jpg?width=320&crop=smart&auto=webp&s=83f6ff46606628dac48f751f260a4b9b375cd44c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2XWjLZRgmp4gyRdh7UWJJdFGyHYAlWYL9Q-rEfrT1Zs.jpg?width=640&crop=smart&auto=webp&s=16bebb4a0b444dde01f0f7f7b95382c40f37819a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2XWjLZRgmp4gyRdh7UWJJdFGyHYAlWYL9Q-rEfrT1Zs.jpg?width=960&crop=smart&auto=webp&s=bd7e1cc5eb2f3cf3f6a417c06ce4bc4e123df0a6', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2XWjLZRgmp4gyRdh7UWJJdFGyHYAlWYL9Q-rEfrT1Zs.jpg?width=1080&crop=smart&auto=webp&s=9af1bc8ee2fd5d2af94a297394be8943db9f81a4', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2XWjLZRgmp4gyRdh7UWJJdFGyHYAlWYL9Q-rEfrT1Zs.jpg?auto=webp&s=e926e6291516a57e5d87ab5afc88ee01002ad9de', 'width': 1200}, 'variants': {}}]} |
Looking for beta testers for wiseone.ai | 100 | I'm launching an ai platform that i built on llama, called wiseone.ai. Basically, it's the only platform around that provides an affordable, holistic analysis for stocks and digital assets. Providing real-time signals for changes in the 3 pillars of price action (social, charting, and fundamentals). | 2024-01-07T22:48:00 | https://v.redd.it/inxbxwq7k3bc1 | 2ez4irl | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 191532w | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/inxbxwq7k3bc1/DASHPlaylist.mpd?a=1707259696%2CNDhlYzA0ZDAyODNlYWQzZDRhNjU0ZjdiYWNkNmJlMTAwOGViMDYyYTI5NTBkNDQyZjhiNGIwNjJhMDc0N2JkNA%3D%3D&v=1&f=sd', 'duration': 38, 'fallback_url': 'https://v.redd.it/inxbxwq7k3bc1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 744, 'hls_url': 'https://v.redd.it/inxbxwq7k3bc1/HLSPlaylist.m3u8?a=1707259696%2CNDk1ZmY0NjYzOWFiZDA0NjIwMTIzMmQ3MjdkZWEzYjRjN2Q4YmY2NDMxMmFjYTZjNTBiZmI2MGZiNGVhODRlNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/inxbxwq7k3bc1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}} | t3_191532w | /r/LocalLLaMA/comments/191532w/looking_for_beta_testers_for_wiseoneai/ | false | false | 100 | {'enabled': False, 'images': [{'id': 'eml1eWFlbjdrM2JjMcZxIT-GEJ4Veh-mzFgzTTW_OyiCHUNS8HSdMzUXm7XZ', 'resolutions': [{'height': 111, 'url': 'https://external-preview.redd.it/eml1eWFlbjdrM2JjMcZxIT-GEJ4Veh-mzFgzTTW_OyiCHUNS8HSdMzUXm7XZ.png?width=108&crop=smart&format=pjpg&auto=webp&s=a67a74945828cba7ec8dad0f52b6a0b95ef98a25', 'width': 108}, {'height': 223, 'url': 'https://external-preview.redd.it/eml1eWFlbjdrM2JjMcZxIT-GEJ4Veh-mzFgzTTW_OyiCHUNS8HSdMzUXm7XZ.png?width=216&crop=smart&format=pjpg&auto=webp&s=032b4e9e50730e3104c0f8f449803624ca44d1fc', 'width': 216}, {'height': 330, 'url': 'https://external-preview.redd.it/eml1eWFlbjdrM2JjMcZxIT-GEJ4Veh-mzFgzTTW_OyiCHUNS8HSdMzUXm7XZ.png?width=320&crop=smart&format=pjpg&auto=webp&s=cc809cd6ca1e44d0d5c6d2117231a2b5365d42f1', 'width': 320}, {'height': 661, 'url': 'https://external-preview.redd.it/eml1eWFlbjdrM2JjMcZxIT-GEJ4Veh-mzFgzTTW_OyiCHUNS8HSdMzUXm7XZ.png?width=640&crop=smart&format=pjpg&auto=webp&s=88c65ea650e32c8ec344a7e36e49273e0a24fa19', 'width': 640}, {'height': 992, 'url': 'https://external-preview.redd.it/eml1eWFlbjdrM2JjMcZxIT-GEJ4Veh-mzFgzTTW_OyiCHUNS8HSdMzUXm7XZ.png?width=960&crop=smart&format=pjpg&auto=webp&s=e31bfd63cb9bb240d05769a3c870d207f1ee8a2b', 'width': 960}, {'height': 1116, 'url': 'https://external-preview.redd.it/eml1eWFlbjdrM2JjMcZxIT-GEJ4Veh-mzFgzTTW_OyiCHUNS8HSdMzUXm7XZ.png?width=1080&crop=smart&format=pjpg&auto=webp&s=779bfdd87c4e6ae6bdc934578c66e6b50fd2ab1c', 'width': 1080}], 'source': {'height': 1116, 'url': 'https://external-preview.redd.it/eml1eWFlbjdrM2JjMcZxIT-GEJ4Veh-mzFgzTTW_OyiCHUNS8HSdMzUXm7XZ.png?format=pjpg&auto=webp&s=98c8b0ef1536c7f815ced81edb5e8d7ac6335691', 'width': 1080}, 'variants': {}}]} | |
🚀 Completely Local RAG with Ollama Web UI, in Two Docker Commands! | 59 | Hey everyone!
We're back with some fantastic news! Following [your invaluable feedback on ollama-webui,](https://www.reddit.com/r/LocalLLaMA/comments/17edvbx/run_local_llms_with_a_userfriendly_web_ui_in_two/) we've supercharged our webui with new, powerful features, making it the ultimate choice for local LLM enthusiasts. Here's what's new in ollama-webui:
🔍 **Completely Local RAG Support** \- Dive into rich, contextualized responses with our newly integrated Retrieval-Augmented Generation (RAG) feature, all processed locally for enhanced privacy and speed.
[Figure 1](https://preview.redd.it/hbk3j3c6e3bc1.png?width=1570&format=png&auto=webp&s=6102ea9df88c3393c76e7690460e3b60421b0a99)
​
[Figure 2](https://preview.redd.it/yd64l1s8e3bc1.png?width=1692&format=png&auto=webp&s=7ef27ba3ab3d1e3933f57f2700f258acf1e1f097)
🔐 **Advanced Auth with RBAC** \- Security is paramount. We've implemented Role-Based Access Control (RBAC) for a more secure, fine-grained authentication process, ensuring only authorized users can access specific functionalities.
🌐 **External OpenAI Compatible API Support** \- Integrate seamlessly with your existing OpenAI applications! Our enhanced API compatibility makes Ollama-webui a versatile tool for various use cases.
📚 **Prompt Library** \- Save time and spark creativity with our curated prompt library, a reservoir of inspiration for your LLM interactions.
And **More**! **Ollama-webui GitHub Repo:** [Ollama Web UI](https://github.com/ollama-webui/ollama-webui)
**Installing the latest ollama-webui** is still a breeze. Just follow these simple steps:
**Step 1: Install Ollama**
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:latest
**Step 2: Launch Ollama WebUI with the new features**
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v ollama-webui:/app/backend/data --name ollama-webui --restart always ghcr.io/ollama-webui/ollama-webui:main
**Installation Guide w/ Docker Compose:** [https://github.com/ollama-webui/ollama-webui#how-to-install-](https://github.com/ollama-webui/ollama-webui#how-to-install-)
**Demo:**
https://i.redd.it/ivdvtyp5d3bc1.gif
​
We're on a mission to make ollama-webui the best Local LLM web interface out there. Your input has been crucial in this journey, and we're excited to see where it takes us next.
Give these new features a try and let us know your thoughts. Your feedback is the driving force behind our continuous improvement!
Thanks for being a part of this journey, Stay tuned for more updates. We're just getting started! 🌟 | 2024-01-07T22:17:43 | https://www.reddit.com/r/LocalLLaMA/comments/1914cam/completely_local_rag_with_ollama_web_ui_in_two/ | tjrbk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1914cam | false | null | t3_1914cam | /r/LocalLLaMA/comments/1914cam/completely_local_rag_with_ollama_web_ui_in_two/ | false | false | 59 | {'enabled': False, 'images': [{'id': 'dhTlZh6SNGgRTcdqNn28yyk8s08fh45qFB9w-MvugLE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DwiE0gsZon6vpLAZYNzK4qeabUX2fSE9h6XvRAY_ChM.jpg?width=108&crop=smart&auto=webp&s=f8d17ebe5c7b996e8ee1a5acf34a24847e1ca1ce', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DwiE0gsZon6vpLAZYNzK4qeabUX2fSE9h6XvRAY_ChM.jpg?width=216&crop=smart&auto=webp&s=3f2c1444b085c8eab97c6d68e94fcc1f8562c7cd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DwiE0gsZon6vpLAZYNzK4qeabUX2fSE9h6XvRAY_ChM.jpg?width=320&crop=smart&auto=webp&s=55e485c8fb1638d46d8acf8cde71a83eb681b30f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DwiE0gsZon6vpLAZYNzK4qeabUX2fSE9h6XvRAY_ChM.jpg?width=640&crop=smart&auto=webp&s=1e52cb41deea00bd9b17765b6854eeafc9337430', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DwiE0gsZon6vpLAZYNzK4qeabUX2fSE9h6XvRAY_ChM.jpg?width=960&crop=smart&auto=webp&s=38b7c5bfb6b04dd2e0acf54fa078e34c6900c32a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DwiE0gsZon6vpLAZYNzK4qeabUX2fSE9h6XvRAY_ChM.jpg?width=1080&crop=smart&auto=webp&s=9156aff6fef97429634de3ce222c353be725af8b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DwiE0gsZon6vpLAZYNzK4qeabUX2fSE9h6XvRAY_ChM.jpg?auto=webp&s=78d70a5702a9c3cafbd1b6a5de22808c323c8a14', 'width': 1200}, 'variants': {}}]} | |
7b models long multiturn test 2: mixtrals, kunoichi, laser +more | 32 | long story short: most models score a little too well on benchmarks and then underperform in real chatting. I needed a way to stack and rank tunes to find the best for my purposes. I built a fixed, long, twisting chat with 30+ messages and I'm scoring it with GPT, which is far from ideal, but it's at least something I can run in the background to get a table of scores so I don't waste time with bad models.
The scores are 0-10 depending on GPT's whim, so they are just an approximation (a blurry reflection of these models' performances) and not a leaderboard.
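For reference, a minimal sketch of what that GPT-based scoring step could look like (assuming the `openai` Python client; the judge model, rubric and parsing here are placeholder choices, not the exact setup used):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def score_chatlog(transcript: str) -> int:
    """Ask a judge model to grade a fixed multi-turn chat from 0 to 10."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder judge model
        temperature=0,
        messages=[
            {"role": "system", "content": "You grade chat transcripts for coherence and instruction following. Reply with a single integer from 0 to 10."},
            {"role": "user", "content": transcript},
        ],
    )
    return int(response.choices[0].message.content.strip())
```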
I'm providing links to the chatlog so you can peek under the hood for models of interest.
A lot of models score badly because, while performing well at first, they become an incoherent mess at about 8k tokens. This may or may not be important to you; just be aware that many mid-scoring models are actually good for single-/few-turn instruction.
Lastly, I noticed that model performance varies quite a bit depending on the prompt format, in unclear and unexpected ways, so I'm testing every model with every format.
here's the table with the last round of testing:
| model | chatml | instruct | vicuna |
| --- | --- | --- | --- |
| mistral-7b-instruct-v0.2.Q5_K_M | [295](https://sharegpt.com/c/b1uhGz0) | [177](https://sharegpt.com/c/59SbKuF) | [321](https://sharegpt.com/c/aP3gPDs) |
| 🆕mixtral-8x7b-instruct-v0.1.Q5_K_M | [288](https://sharegpt.com/c/ZCX8M4h) | [280](https://sharegpt.com/c/iK9mcfT) | [312](https://sharegpt.com/c/w2WCyK6) |
| 🆕silicon-maid-7b.Q5_K_M | [209](https://sharegpt.com/c/EhuP8xI) | [297](https://sharegpt.com/c/zTOIsg1) | [295](https://sharegpt.com/c/6BQ6hfQ) |
| neuralhermes-2.5-mistral-7b.Q5_K_M | [200](https://sharegpt.com/c/oNwckI3) | [270](https://sharegpt.com/c/Xw402Uj) | [295](https://sharegpt.com/c/sgfUF6S) |
| 🆕neuralhermes-2.5-mistral-7b-laser.Q5_K_M | [199](https://sharegpt.com/c/U0AKVyz) | [292](https://sharegpt.com/c/N8CFKm3) | [292](https://sharegpt.com/c/QC8sYwB) |
| 🆕kunoichi-7b.Q5_K_M | [284](https://sharegpt.com/c/uZY2zkv) | [245](https://sharegpt.com/c/63b4uAf) | [258](https://sharegpt.com/c/sts0iEU) |
| toppy-m-7b.Q5_K_M | [280](https://sharegpt.com/c/pTEoyyZ) | [164](https://sharegpt.com/c/zJERlCN) | [205](https://sharegpt.com/c/IAgbota) |
| openhermes-2.5-mistral-7b-16k.Q5_K_M | [244](https://sharegpt.com/c/y0RgZxL) | [250](https://sharegpt.com/c/2NvbmrY) | [279](https://sharegpt.com/c/YKvb2zr) |
| 🆕openchat-3.5-1210.Q5_K_M | [272](https://sharegpt.com/c/cJznlrI) | [254](https://sharegpt.com/c/wbYRX6Z) | [193](https://sharegpt.com/c/LgI0aJK) |
| 🆕starling-lm-7b-alpha.Q5_K_M | [251](https://sharegpt.com/c/UALINwa) | [215](https://sharegpt.com/c/4pLvEKu) | [55](https://sharegpt.com/c/SHulDij) |
| xdan-l1-chat-rl-v1.Q5_K_M | [249](https://sharegpt.com/c/FwTMiSd) | [82](https://sharegpt.com/c/Meso00e) | [139](https://sharegpt.com/c/491QMlA) |
| dolphin-2.6-mistral-7b-dpo.Q5_K_M | [233](https://sharegpt.com/c/LlpXJY4) | [114](https://sharegpt.com/c/w3UrtA1) | [194](https://sharegpt.com/c/r7lGZT0) |
| vicuna-7b-v1.5-16k.Q5_K_M | [212](https://sharegpt.com/c/i59zj2Z) | [212](https://sharegpt.com/c/9u9MxeY) | [232](https://sharegpt.com/c/sQQmIgp) |
| nous-hermes-2-solar-10.7b.Q5_K_M | [40](https://sharegpt.com/c/oAMuVgu) | [199](https://sharegpt.com/c/mG6Oj4s) | [197](https://sharegpt.com/c/uKxCW8w) |
| yarn-mistral-7b-64k.Q5_K_M | [48](https://sharegpt.com/c/G6UOXE0) | [10](https://sharegpt.com/c/ia9Hf46) | [183](https://sharegpt.com/c/aqolu5V) |
| 🆕beyonder-4x7b-v2.Q5_K_M | [151](https://sharegpt.com/c/6HvOOfI) | [131](https://sharegpt.com/c/mPbIV3k) | [163](https://sharegpt.com/c/N4TbfkR) |
| 🆕zephyr-7b-beta.Q5_K_M | [30](https://sharegpt.com/c/Gq7oTxr) | [36](https://sharegpt.com/c/SrtCNJj) | [157](https://sharegpt.com/c/NVkHv6I) |
| openchat_3.5-16k.Q5_K_M | [13](https://sharegpt.com/c/phAcNU1) | [10](https://sharegpt.com/c/4NpE0pl) | [0](https://sharegpt.com/c/9MWFgNm) | | 2024-01-07T22:14:23 | https://www.reddit.com/r/LocalLLaMA/comments/19149av/7b_models_long_multiturn_test_2_mixtrals_kunoichi/ | LoSboccacc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19149av | false | null | t3_19149av | /r/LocalLLaMA/comments/19149av/7b_models_long_multiturn_test_2_mixtrals_kunoichi/ | false | false | self | 32 | null |
GPT 4 was released on 14th March 2023. GPT 4V was released on 25th September 2023. | 5 | If Google releases Gemini Ultra, which is fully multimodal, in January 2024, they will not be that far behind OpenAI.
Even if OpenAI releases GPT 4.5 in January, rumors are that Gemini 2 is already in training and will compete with GPT 5. It may even be released before GPT 5.
Llama 3 will be released in February, supposedly as good as GPT but multimodal.
Mistral will also release a GPT 4 level model this year. Not sure it will be open source, as Mistral Medium is closed source.
Amazon is also training a 2 trillion parameter model. Bigger than GPT 4.
Apple will also launch something close to GPT 4.
I can expect Microsoft to launch their own model too even though they can depend on OpenAI.
Maybe Nvidia will launch one of their models too, considering they basically control compute, besides Google and Apple.
Other companies are also rushing towards the gold.
GPT 4 is an amazing model.
We will probably see 7-8 multimodal models better than it this year: Gemini (barely), GPT 4.5, Amazon's model, Apple's model, 2-3 models from China, Gemini 2, GPT 5 and maybe even a Llama 4 or Mistral Mega.
This year will be massive. | 2024-01-07T22:09:06 | https://www.reddit.com/r/LocalLLaMA/comments/19144gp/gpt_4_was_released_on_14th_march_2023_gpt_4v_was/ | TysonUsykFury | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19144gp | false | null | t3_19144gp | /r/LocalLLaMA/comments/19144gp/gpt_4_was_released_on_14th_march_2023_gpt_4v_was/ | false | false | self | 5 | null |
It only costs an arm, a leg and your firstborn’s college tuition | 1 | 2024-01-07T22:05:26 | ZABKA_TM | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 191415o | false | null | t3_191415o | /r/LocalLLaMA/comments/191415o/it_only_costs_an_arm_a_leg_and_your_firstborns/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'CkUe7BDLCKRD6oE35WZWPXTcMiBJmgPwgz00E7xnY5I', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/19y8eoikc3bc1.jpeg?width=108&crop=smart&auto=webp&s=2d7bc7a3ede79bf9b00beeefb9a1131a3ffcf933', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/19y8eoikc3bc1.jpeg?width=216&crop=smart&auto=webp&s=fa5893a30657cd574ec7747a0c58eae5ea38fcc3', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/19y8eoikc3bc1.jpeg?width=320&crop=smart&auto=webp&s=d4f43f64a1ea2fbe8ebd9af8266cef2f44e01a95', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/19y8eoikc3bc1.jpeg?width=640&crop=smart&auto=webp&s=eb4323756f42bf3b386a466fc09a4ea7a0da1067', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/19y8eoikc3bc1.jpeg?width=960&crop=smart&auto=webp&s=d03d4e072d8797de116657dd87ff59b489933371', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/19y8eoikc3bc1.jpeg?width=1080&crop=smart&auto=webp&s=8d4539cb8aad2fc33837f31e54cd0d7d50182a21', 'width': 1080}], 'source': {'height': 3024, 'url': 'https://preview.redd.it/19y8eoikc3bc1.jpeg?auto=webp&s=c869b7b624955a7a0c7e06bd4f9f5aa4f76fd1da', 'width': 4032}, 'variants': {}}]} | |||
GLAI (gguf llama ai) was updated to v0.1.1 ! | 3 | Hi, big update for the glai project,
Updated to v0.1.1
Large refactor to move ModelDB to a separate package (which is done now).
More details on the new dedicated package here [https://www.reddit.com/r/LocalLLaMA/comments/190k9ps/a\_database\_to\_automatically\_handle\_gguf\_lama/?utm\_source=share&utm\_medium=web2x&context=3](https://www.reddit.com/r/LocalLLaMA/comments/190k9ps/a_database_to_automatically_handle_gguf_lama/?utm_source=share&utm_medium=web2x&context=3)
Also unified the API for both easy-to-use classes.
Added a lot of examples and reworked the whole readme.
Added methods for importing only specific models to a new db dir, as well as importing model data from hugging face repos.
Below is a little excerpt from the README. More info in the repo :)
# glai
GGUF LLAMA AI - Package for simplified text generation with Llama models quantized to GGUF format
Provides high level APIs for loading models and generating text completions.
It will find (and download if needed) and load the right model for inference with as little as one line of code.
## Installation
To install the package use pip
```
pip install glai
```
## Usage:
You can use one of the two high-level classes provided with the package to easily develop AI applications.
### Import package
```python
from glai import AutoAI, EasyAI
# it's enough to use one of these; EasyAI will probably be better except in the most basic cases
```
### AutoAI - automatic model loading and inference
`AutoAI` - the easiest way out there to use llama models: it can generate completions with one line of code, requires minimal configuration, and uses a library of preconfigured models
```python
ai = AutoAI(name_search="Mistral")  # finds (and downloads if needed) a matching model, then loads it
ai.generate("Hello")  # runs a completion with the loaded model
```
### EasyAI - straightforward manual model configuration
`EasyAI` - a straightforward high-level class that lets you easily use llama models from a verified model database (50+ name-by-quantization versions) or import and save dozens of models at once from Hugging Face GGUF repo links. It **abstracts away everything related to handling the model configuration, managing model files, downloading, loading the model and tokenizer, message formatting, inference and everything else.** If the model files aren't downloaded, it will grab them on the fly before loading the model. By default all models are saved to the `gguf_modeldb` package repo and are accessible globally from any project.
You can also provide a specific dir and import only selected models, copying their GGUFs or downloading them later.
```python
easy = EasyAI()
easy.load_model_db()  # load the verified model database
easy.find_model_data(name_search="Mistral")  # pick a model entry by name
easy.load_ai(max_total_tokens=100)  # download (if needed) and load the model
generated_message = easy.generate("Hello")
```
## Generations are wrapped in an intuitive AIMessage object
The outputs of generations are passed via `AIMessage` data objects,
which keep information on the ai/user/system tags (e.g. `[INST]`, `[/INST]`) depending on the type of the message,
as well as the message content.
They can be easily parsed to strings, or just the content can be accessed using the `content` attribute:
```python
print(generated_message) #prints message with tags
print(generated_message.content) #prints just the inner text
```
### ModelDB - search models and show db info
On the back end, searching for model data, adding new model data, downloading GGUFs and handling files is done by `ModelDB` from the `gguf_modeldb` package. Its methods can be accessed via the `.model_db` attribute on both the `AutoAI` and `EasyAI` classes.
```python
from glai import EasyAI
eai = EasyAI()
eai.load_model_db()
eai.model_db.show_db_info() #prints information on all available models
```
Contributions and PRs are very welcome, so is criticism and feedback! Thanks. | 2024-01-07T21:41:02 | https://www.reddit.com/r/LocalLLaMA/comments/1913fas/glai_gguf_llama_ai_was_updated_to_v011/ | --lael-- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1913fas | false | null | t3_1913fas | /r/LocalLLaMA/comments/1913fas/glai_gguf_llama_ai_was_updated_to_v011/ | false | false | self | 3 | null |
llama2:13b bug or a feature? | 1 | ​
Is the model really narrowed down to tech topics, judging by how it ends up answering non-tech questions?
​
https://preview.redd.it/gq7v5pk753bc1.png?width=635&format=png&auto=webp&s=66752e5ee345f85e612b2fed273d08d8da494d25 | 2024-01-07T21:26:34 | https://www.reddit.com/r/LocalLLaMA/comments/191328g/llama213b_bug_or_a_feature/ | speacexstarlink | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 191328g | false | null | t3_191328g | /r/LocalLLaMA/comments/191328g/llama213b_bug_or_a_feature/ | false | false | 1 | null | |
What are some SoTA settings for *fast* inference? | 3 | I'm running Mistral (tunes) and Mixtral, and trying to get more **tok/s**, even in batch mode, for the sake of generating synth datasets.
I'm interested in both CPU-only and GPU-only inference.
* The `unsloth` library claims 5-20x faster training via custom Triton kernels; does anything like this exist to help inference?
* Is there a difference in speed for different quants?
* Is anyone doing speculative decoding on, say, Mistral + Mixtral?
* Is flashattention automatically applied?
* any caching tricks? batching tricks?
* any *libraries*/*servers* that bring in all applicable tricks?
* any 8x7B Mamba/RWKV yet? (omg that would be amazing) | 2024-01-07T21:23:10 | https://www.reddit.com/r/LocalLLaMA/comments/1912z9v/what_are_some_sota_settings_for_fast_inference/ | BayesMind | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1912z9v | false | null | t3_1912z9v | /r/LocalLLaMA/comments/1912z9v/what_are_some_sota_settings_for_fast_inference/ | false | false | self | 3 | null |
Ay r/LocalLLaMA got to 100k subs | 1 | [removed] | 2024-01-07T21:09:01 | https://www.reddit.com/r/LocalLLaMA/comments/1912mev/ay_rlocalllama_got_to_100k_subs/ | PopcaanFan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1912mev | false | null | t3_1912mev | /r/LocalLLaMA/comments/1912mev/ay_rlocalllama_got_to_100k_subs/ | false | false | self | 1 | null |
Mixtral won't stop mentioning the provided context in RAG use-case | 2 | Has anyone had success using Mixtral-Instruct for Retrieval Augmented Generation? I can't get it to stop mentioning the provided context in the answer (eg it will respond "Based on the provided context blah blah blah"). I've tried instructing it to not mention the provided context in both the system and user prompt with no result. | 2024-01-07T20:58:52 | https://www.reddit.com/r/LocalLLaMA/comments/1912co3/mixtral_wont_stop_mentioning_the_provided_context/ | Preacher2013 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1912co3 | false | null | t3_1912co3 | /r/LocalLLaMA/comments/1912co3/mixtral_wont_stop_mentioning_the_provided_context/ | false | false | self | 2 | null |
Different attacks on machine learning models | 6 | 2024-01-07T20:52:04 | https://rnikhil.com/2024/01/07/attacking-neural-networks.html | Excellent-Effect237 | rnikhil.com | 1970-01-01T00:00:00 | 0 | {} | 19126pu | false | null | t3_19126pu | /r/LocalLLaMA/comments/19126pu/different_attacks_on_machine_learning_models/ | false | false | default | 6 | null | |
Hey UI developers: Please include screenshots of your UI! | 65 | I've been UI shopping lately, since text-gen-webui is ugly, constantly has to be re-set up every restart, and crashes fully if GGUFs exceed VRAM limit. Exui is nice, but overly simplistic and you can't use GGUFs with it.
There are a couple dozen UIs out there that I know of, and for some reason, less than half of them include screenshots of what they actually look like. UX is a big part of UI. I don't want to go through the hassle of installing your UI just to find out it makes my eyes bleed or it takes 20 steps to set parameters. Please include a few screencaps of your various control panels and windows! | 2024-01-07T20:46:13 | https://www.reddit.com/r/LocalLLaMA/comments/19121n4/hey_ui_developers_please_include_screenshots_of/ | Arcturus17 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19121n4 | false | null | t3_19121n4 | /r/LocalLLaMA/comments/19121n4/hey_ui_developers_please_include_screenshots_of/ | false | false | self | 65 | null |
[Urgent] can I make a Llama2 based app which doesn't have to be run on PyTorch? | 1 | Where else can I run it on? | 2024-01-07T20:34:54 | https://www.reddit.com/r/LocalLLaMA/comments/1911riy/urgent_can_i_make_a_llama2_based_app_which_doesnt/ | Accomplished_Cod1099 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1911riy | false | null | t3_1911riy | /r/LocalLLaMA/comments/1911riy/urgent_can_i_make_a_llama2_based_app_which_doesnt/ | false | false | self | 1 | null |
LM Studio - Dolphin 2.7 Mixtral: Stop Generating problem | 1 | Hello guys, I'm using LM studio.
I've tried Dolphin 2.7, as a potential upgrade in creativity, instead of Mixtral Instruct 0.1.
I have a problem, though. In Mixtral Instruct, I can press stop generating, then input another prompt and the LLM responds to it.
But in Dolphin, if I stop it from generating and put something else, the LLM continues on the original prompt, ignoring anything else I input.
Is there a magic setting that can be used to make Dolphin actually listen to my prompts? | 2024-01-07T20:31:16 | https://www.reddit.com/r/LocalLLaMA/comments/1911oc1/lm_studio_dolphin_27_mixtral_stop_generating/ | PwanaZana | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1911oc1 | false | null | t3_1911oc1 | /r/LocalLLaMA/comments/1911oc1/lm_studio_dolphin_27_mixtral_stop_generating/ | false | false | self | 1 | null |
LiteLlama-460M-1T has 460M parameters trained with 1T tokens | 14 | 2024-01-07T20:28:23 | https://huggingface.co/ahxt/LiteLlama-460M-1T | davidmezzetti | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1911lol | false | null | t3_1911lol | /r/LocalLLaMA/comments/1911lol/litellama460m1t_has_460m_parameters_trained_with/ | false | false | 14 | {'enabled': False, 'images': [{'id': 'CfyzbUAmgYUP62Lw81UgnUe8zRwF7BdkrW-By2_ZIk4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/I5g8tjLEDWzCuTN1ZJkfoCVlXoy7weyFgDlHR9h-WVM.jpg?width=108&crop=smart&auto=webp&s=e846d3a16596d2fc2bdf5cbc075f282ea4afa5ba', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/I5g8tjLEDWzCuTN1ZJkfoCVlXoy7weyFgDlHR9h-WVM.jpg?width=216&crop=smart&auto=webp&s=78ef0bd02e46c058cb18915a6336bd3fd1096bef', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/I5g8tjLEDWzCuTN1ZJkfoCVlXoy7weyFgDlHR9h-WVM.jpg?width=320&crop=smart&auto=webp&s=545c9a8cae01e864686e443719e84cb65e14294b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/I5g8tjLEDWzCuTN1ZJkfoCVlXoy7weyFgDlHR9h-WVM.jpg?width=640&crop=smart&auto=webp&s=db0cbbf080d9d17b1ed396eab46a87481f450490', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/I5g8tjLEDWzCuTN1ZJkfoCVlXoy7weyFgDlHR9h-WVM.jpg?width=960&crop=smart&auto=webp&s=f128925d0a777c5a0eb6d3973a588cebbd43894f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/I5g8tjLEDWzCuTN1ZJkfoCVlXoy7weyFgDlHR9h-WVM.jpg?width=1080&crop=smart&auto=webp&s=c972a8fba0fdc10b844d4b45ebe8120d25e04daf', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/I5g8tjLEDWzCuTN1ZJkfoCVlXoy7weyFgDlHR9h-WVM.jpg?auto=webp&s=969a8b95e224e5f91943472328fc8bed54b0f370', 'width': 1200}, 'variants': {}}]} | ||
What are some good setups to learn about orchestrating model training / ML infrastructure in general? | 1 | Hello! I'm pretty new to this topic, so apologies for any silly questions. I see a lot of the topics here related to home setups usually revolve around hosting local models, but is the same type of setup necessary if I just want to learn about the infrastructure? For example, is model training on 3080s vs 4080s (random examples, it could be any GPU) wildly different besides the speed?
Is something like nvlink necessary? Thanks for any suggestions! | 2024-01-07T20:10:43 | https://www.reddit.com/r/LocalLLaMA/comments/19115pm/what_are_some_good_setups_to_learn_about/ | ifjo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19115pm | false | null | t3_19115pm | /r/LocalLLaMA/comments/19115pm/what_are_some_good_setups_to_learn_about/ | false | false | self | 1 | null |
My first pipeline using Llama7b / llvm / Langchain / EasyOCR / TinyDB | 11 | I am really happy with the result so I feel the urge to share. Hope it matches the community interest.
I started exploring self-hosted AI a while ago, and I had a hard time finding an architecture that met the following requirements:
- Have OpenAI API compatibility, so that I can fall back to using OpenAI if the results are not satisfying
- Pipeline the tasks in a data lake fashion
- Work on Intel Arc (I bought 5x 16GB cards for a total of $1000 on sale); the full machine cost $2000.
I finally made a DHMS (data hoarder management system) for my company's accounting, to have an organized filesystem / database to manage all the receipts / invoices I batch scan:
The architecture:
```
+----------+ +---------+ +--------------------+ +---------------------+
| Invoices | --> | EasyOCR | --> | TinyDB | --> | Coherent Filesystem |
+----------+ +---------+ +--------------------+ +---------------------+
^
|
+--------------------+
| Langchain Pipeline |
+--------------------+
```
Langchain:
```markdown
| Langchain Extractor | --> | Llama7b (t=0.8) | --> | Extracted Information | --> | Langchain Formatter | --> | Llama7b (t=0.3) |
```
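For illustration, here is a minimal sketch of the first two stages (EasyOCR into TinyDB); the paths and field names are placeholders, not my actual schema:

```python
import easyocr
from tinydb import TinyDB

reader = easyocr.Reader(["en"])   # OCR model, loaded once
db = TinyDB("receipts.json")      # flat-file document store

def ingest(image_path: str) -> None:
    # detail=0 returns plain text lines instead of boxes and confidences
    lines = reader.readtext(image_path, detail=0)
    db.insert({"source": image_path, "raw_text": "\n".join(lines)})

ingest("scans/invoice_001.png")  # placeholder path
```

The Langchain stage then reads these TinyDB records, extracts the fields with the first Llama7b pass and formats them with the second.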
And it runs for a fraction of the cost of OpenAI GPT-3.5:
most files average 4096 tokens and the model outputs 512 tokens, which for 1000 files adds up to: $5120 = 4096 * 0.0010 * 1000 + 0.0020 * 512 * 1000. Even 10% of that price was too high already.
The processing took 4h: 01:12:02 -> 05:19:29 which is insanely fast.
Hosting costs are part of my office lease.
Next Step: Buy some more gpus; perhaps AMDs for a change; Nvidia beeing too expensive. feeding all my daily mails to Llama / Mistral... | 2024-01-07T20:01:36 | https://www.reddit.com/r/LocalLLaMA/comments/1910xpv/my_first_pipeline_using_llama7b_llvm_langchain/ | MoiSanh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1910xpv | false | null | t3_1910xpv | /r/LocalLLaMA/comments/1910xpv/my_first_pipeline_using_llama7b_llvm_langchain/ | false | false | self | 11 | null |
Added JSON generation and streaming with automatic GBNF grammars for JSON schemas to ModelFusion JS/TS client for llama.cpp server | 4 | 2024-01-07T19:46:38 | https://modelfusion.dev/integration/model-provider/llamacpp#generate-structure | lgrammel | modelfusion.dev | 1970-01-01T00:00:00 | 0 | {} | 1910krj | false | null | t3_1910krj | /r/LocalLLaMA/comments/1910krj/added_json_generation_and_streaming_with/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'a-4ml4TxskRWZByl6AYAgCO7u-aGfpUrtD4m7IFAD0A', 'resolutions': [{'height': 45, 'url': 'https://external-preview.redd.it/dzyL6Wk_q-n0FVZ4QBrA9-TfzgiPEgKg32VhjehevuE.jpg?width=108&crop=smart&auto=webp&s=4d509c8940e445f9070b094b77ec7226a76ea9da', 'width': 108}, {'height': 90, 'url': 'https://external-preview.redd.it/dzyL6Wk_q-n0FVZ4QBrA9-TfzgiPEgKg32VhjehevuE.jpg?width=216&crop=smart&auto=webp&s=ef7a04b6210f83befbc12d3862aaad1a3fd1b3e0', 'width': 216}, {'height': 134, 'url': 'https://external-preview.redd.it/dzyL6Wk_q-n0FVZ4QBrA9-TfzgiPEgKg32VhjehevuE.jpg?width=320&crop=smart&auto=webp&s=9e36757274c0e8c413f116d08ace92945e652f81', 'width': 320}, {'height': 268, 'url': 'https://external-preview.redd.it/dzyL6Wk_q-n0FVZ4QBrA9-TfzgiPEgKg32VhjehevuE.jpg?width=640&crop=smart&auto=webp&s=24dc984feef16403e3a1a772679161ebf34ad7a4', 'width': 640}, {'height': 403, 'url': 'https://external-preview.redd.it/dzyL6Wk_q-n0FVZ4QBrA9-TfzgiPEgKg32VhjehevuE.jpg?width=960&crop=smart&auto=webp&s=2ffb072c973cc0a6e25c712682d0533b43516a14', 'width': 960}], 'source': {'height': 420, 'url': 'https://external-preview.redd.it/dzyL6Wk_q-n0FVZ4QBrA9-TfzgiPEgKg32VhjehevuE.jpg?auto=webp&s=ee069f137cd338089fa829a7028d632daf819fa8', 'width': 1000}, 'variants': {}}]} | ||
How did I miss this? During Q-star speculation drama, Yann LeCun said Sam Altman has a long track record of self delusion. Either Mr. LeCun is arrogant or he has a mole inside OpenAI who informed him that Q-Star is all hype, no breakthrough. | 1 | 2024-01-07T19:28:44 | https://www.reddit.com/gallery/19105lz | TysonUsykFury | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 19105lz | false | null | t3_19105lz | /r/LocalLLaMA/comments/19105lz/how_did_i_miss_this_during_qstar_speculation/ | false | false | 1 | null | ||
Cpu with gpu offloading oobabooga | 1 | I have an amd ryzen 9 7940hs with an integrated graphics card which has 20gb of storage( vram i proposes) is it possible to use bith my cpu and integrated gpu? If yes how | 2024-01-07T19:23:25 | https://www.reddit.com/r/LocalLLaMA/comments/19100xc/cpu_with_gpu_offloading_oobabooga/ | Single-Cow-5163 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 19100xc | false | null | t3_19100xc | /r/LocalLLaMA/comments/19100xc/cpu_with_gpu_offloading_oobabooga/ | false | false | self | 1 | null |
It finally happened: lmsys Chat Arena is also gamed: Multiple <70B models ranked higher than Llama 2 70B. Example: Starling 7B suspiciously ranked above Llama 2 70B and Perplexity 70B. | 1 | 2024-01-07T19:15:45 | nderstand2grow | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 190zuix | false | null | t3_190zuix | /r/LocalLLaMA/comments/190zuix/it_finally_happened_lmsys_chat_arena_is_also/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'upVxFz7emM-wkjAmejcvxSJI_nx03VjSNIGQszL3YAM', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/zdl6om0bi2bc1.png?width=108&crop=smart&auto=webp&s=8db2f2694e388a517e6daa58958878f4ce56a4aa', 'width': 108}, {'height': 123, 'url': 'https://preview.redd.it/zdl6om0bi2bc1.png?width=216&crop=smart&auto=webp&s=94a3b472eb74bb098571501aa73c591ede932dbc', 'width': 216}, {'height': 183, 'url': 'https://preview.redd.it/zdl6om0bi2bc1.png?width=320&crop=smart&auto=webp&s=f4ed93a7536f64abb022e80f623f872a30ef064d', 'width': 320}, {'height': 367, 'url': 'https://preview.redd.it/zdl6om0bi2bc1.png?width=640&crop=smart&auto=webp&s=b2fa21dd6a617c6a66b77ad109fb8631f38d7376', 'width': 640}, {'height': 550, 'url': 'https://preview.redd.it/zdl6om0bi2bc1.png?width=960&crop=smart&auto=webp&s=2dca41a76533a51f28b22870ff775b54203363ad', 'width': 960}, {'height': 619, 'url': 'https://preview.redd.it/zdl6om0bi2bc1.png?width=1080&crop=smart&auto=webp&s=da7c5313e4ac0d96b600b04daaad1ad69211582a', 'width': 1080}], 'source': {'height': 1362, 'url': 'https://preview.redd.it/zdl6om0bi2bc1.png?auto=webp&s=1595a30afbb851d9add8fab5ab8c9a9ab8cbb8a3', 'width': 2374}, 'variants': {}}]} | |||
Venus 120b v1.2 | 35 | https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.2
Hey everyone, this week I got around to creating another frankenmerge model in the "Venus" lineup. For this one I only used a single model: lzlv_70b. Out of all the 70b models I've used, this one overall has the best combination of instruction following and creativity, so I decided to see what would happen if I just interleaved its layers with itself.
The result seems pretty good. I've been testing it out for a couple days and it's better at staying on topic and following instructions than either of its two predecessors and it also doesn't seem to have issues with censorship.
Let me know what you all think! | 2024-01-07T19:13:23 | https://www.reddit.com/r/LocalLLaMA/comments/190zsk1/venus_120b_v12/ | nsfw_throwitaway69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190zsk1 | false | null | t3_190zsk1 | /r/LocalLLaMA/comments/190zsk1/venus_120b_v12/ | false | false | self | 35 | null |
Tips for Following the Latest News and Finding Quality Content on LLMs without Doomscrolling? | 26 | I'm deeply interested in LLMs, and I'm looking to stay updated with the latest ideas, papers, and developments in this field. Platforms like Twitter, Reddit, and YouTube are rich in content, but I often find myself doomscrolling through endless feeds, which can be overwhelming and time-consuming.
I'd love to hear your strategies or tools for efficiently finding high-quality, relevant content about large language models, without getting lost in the endless stream of information. Specifically, I'm looking for:
1. Techniques or habits for effectively finding targeted content on large language models across platforms like Twitter, Reddit, and YouTube.
2. Recommendations for specific accounts, channels, or communities that consistently provide valuable insights into this field.
3. Tips on avoiding the trap of doomscrolling while searching for quality information.
4. Any tools or services that help in curating or filtering content related to large language models.
Your personal experiences and suggestions would be incredibly helpful. I'm eager to learn from your approaches and improve the way I engage with this content.
Thank you in advance for your help!
Here are some previous posts I found on this topic, however I would love to see the latest sources and tips, thank you:
What is your LLM news feed ? (5 months ago)
[https://www.reddit.com/r/LocalLLaMA/comments/15f04h5/what\_is\_your\_llm\_news\_feed/](https://www.reddit.com/r/LocalLLaMA/comments/15f04h5/what_is_your_llm_news_feed/)
How to stay up-to-date in the (open-source) AI field? Recommendations for blogs and other resources? (9 months ago): [https://www.reddit.com/r/LocalLLaMA/comments/12ecl7k/how\_to\_stay\_uptodate\_in\_the\_opensource\_ai\_field/](https://www.reddit.com/r/LocalLLaMA/comments/12ecl7k/how_to_stay_uptodate_in_the_opensource_ai_field/) | 2024-01-07T19:04:44 | https://www.reddit.com/r/LocalLLaMA/comments/190zl85/tips_for_following_the_latest_news_and_finding/ | alpsencer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190zl85 | false | null | t3_190zl85 | /r/LocalLLaMA/comments/190zl85/tips_for_following_the_latest_news_and_finding/ | false | false | self | 26 | null |
Fine-tuning for Natural Language to SPARQL queries | 1 | I'm trying to fine-tune a LLM to generate SPARQL from natural language. I started with llama but I quickly ran out of credits on Google Colab. Now I'm waiting for my university to grant me access to their cloud resources. Until then what other models do you recommend? Currently I'm playing with Flan-T5, is it worth fine-tuning it for this task or am I just wasting my time? | 2024-01-07T18:46:27 | https://www.reddit.com/r/LocalLLaMA/comments/190z57f/finetuning_for_natural_language_to_sparql_queries/ | Optimal_Fishing2072 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190z57f | false | null | t3_190z57f | /r/LocalLLaMA/comments/190z57f/finetuning_for_natural_language_to_sparql_queries/ | false | false | self | 1 | null |
[SUGGESTION] Suggestion for an LLM API Server with document ingestion support | 1 | Hi everyone,
what open-source LLM server would you suggest, with API support for chat, document ingestion and document querying?
​
What I want to achieve is to create a .NET client, possibly a desktop application, that can manage the server (start and stop it) and interact with it through an API that allows choosing a model for chatting, as well as ingesting and then querying uploaded documents.
​
I am not sure if the best approach would be a Docker image, so that it would be easier to make the whole thing run on different machines.
​
I would test the server from a Hyper-V Ubuntu VM with Docker, so I suppose the server won't be able to use CUDA, even though the PC has an NVIDIA 3080 (Laptop).
​
Thanks in advance for your suggestions! | 2024-01-07T18:24:36 | https://www.reddit.com/r/LocalLLaMA/comments/190ym9b/suggestion_suggestion_for_an_llm_api_server_with/ | darcome | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190ym9b | false | null | t3_190ym9b | /r/LocalLLaMA/comments/190ym9b/suggestion_suggestion_for_an_llm_api_server_with/ | false | false | self | 1 | null |
I just now added a paragraph to the Wikipedia article on Generative AI regarding open-source, including mention of r/LocalLLaMA | 17 | 2024-01-07T17:48:59 | https://en.wikipedia.org/w/index.php?title=Generative_artificial_intelligence&diff=1194184338&oldid=1194158714 | michaelmalak | en.wikipedia.org | 1970-01-01T00:00:00 | 0 | {} | 190xrz1 | false | null | t3_190xrz1 | /r/LocalLLaMA/comments/190xrz1/i_just_now_added_a_paragraph_to_the_wikipedia/ | false | false | 17 | {'enabled': False, 'images': [{'id': 'rlzXjNiW5fnwpwzacq9hjZb6-rTg3zw5OzhBYt44w5k', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/fKR_a9c_RY0cGt7_njxh2i6BrTeIrQx_hOJRF6zZfSU.jpg?width=108&crop=smart&auto=webp&s=fefb1e59ea2c639d8f10bc43a9ef243065d7f504', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/fKR_a9c_RY0cGt7_njxh2i6BrTeIrQx_hOJRF6zZfSU.jpg?width=216&crop=smart&auto=webp&s=cee2c4e95a54c3043bb75cffd108ef061c593e54', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/fKR_a9c_RY0cGt7_njxh2i6BrTeIrQx_hOJRF6zZfSU.jpg?width=320&crop=smart&auto=webp&s=25ea45ee932d8ab81d7c6f02fb623aa9a36e359a', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/fKR_a9c_RY0cGt7_njxh2i6BrTeIrQx_hOJRF6zZfSU.jpg?width=640&crop=smart&auto=webp&s=663710f5f0cd18d3adf72f84dc48375784e4dd0f', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/fKR_a9c_RY0cGt7_njxh2i6BrTeIrQx_hOJRF6zZfSU.jpg?width=960&crop=smart&auto=webp&s=23d6ae1cd058f59186883281024e9a0ecaae0328', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/fKR_a9c_RY0cGt7_njxh2i6BrTeIrQx_hOJRF6zZfSU.jpg?width=1080&crop=smart&auto=webp&s=7a64f8d844d0aa70ef1bda12a5a727cb5ff6f62d', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/fKR_a9c_RY0cGt7_njxh2i6BrTeIrQx_hOJRF6zZfSU.jpg?auto=webp&s=dd8a464bf0fdf9b54c7e4982ecc0c1c39004ffc1', 'width': 1200}, 'variants': {}}]} | ||
Can a RAG in conjunction with an LLM create a simulated version of consciousness? | 1 | In this scenario, using a RAG setup to "save" memories or experiences from interactions involves storing them in a retrievable, vectorized database. The language model would then access these stored memories when relevant to a given situation, effectively using past interactions to inform its current responses.
The challenge I see is understanding what information from an interaction needs to be picked and stored for later use. Then you have the reverse problem of knowing what information needs to be fed back into the LLM when given a scenario.
My idea of thought is a stream of ideas or “stream of consciousness”. All thoughts are informed by past thoughts and interactions as well as interactions with external forces (the environment, other people, etc.). With a RAG, could you not be constantly feeding an LLM inputs from some database and then updating the database with whatever “conclusion” the LLM outputs based on the LLM’s artificially created inputs as well as inputs from external sources?
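To make that loop concrete, here is a rough sketch (assuming a sentence-transformers embedding model and a plain in-memory store; every name is a placeholder and the actual LLM call is left out):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")
memories = []  # list of (text, embedding) tuples

def remember(text):
    memories.append((text, embedder.encode(text)))

def recall(query, k=3):
    q = embedder.encode(query)
    scored = sorted(
        memories,
        key=lambda m: float(np.dot(q, m[1]) / (np.linalg.norm(q) * np.linalg.norm(m[1]) + 1e-9)),
        reverse=True,
    )
    return [text for text, _ in scored[:k]]

# One "thought" step: retrieve relevant memories, prompt the LLM with them
# (call not shown), then store whatever conclusion the LLM produces.
context = recall("what does the user care about right now?")
remember("Conclusion: the user prefers concise answers.")  # output of the LLM step
```

The hard part, as noted above, is still deciding which conclusions are worth a `remember()` call and which memories a given scenario should `recall()`.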
If the problem of knowing what information to store and what information to feed back in are solved, could this strategy enable the creation of an agent that has the version of consciousness that I just described? Where does this idea fail? What am I not considering? | 2024-01-07T17:43:40 | https://www.reddit.com/r/LocalLLaMA/comments/190xnij/can_a_rag_in_conjunction_with_an_llm_create_a/ | ApprehensiveJob171 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190xnij | false | null | t3_190xnij | /r/LocalLLaMA/comments/190xnij/can_a_rag_in_conjunction_with_an_llm_create_a/ | false | false | self | 1 | null |
Can't continue old chats | 3 | Am I the only one who can't continue chats after I close them? With shorter chats it's manageable, but the more tokens there are, the longer they take to load, to the point where after around 2,000 tokens they don't load anymore. While roleplaying there's no problem, and I'm not maxing out my RAM either. I'm running CPU-only. | 2024-01-07T17:42:09 | https://www.reddit.com/r/LocalLLaMA/comments/190xm5x/cant_continue_old_chats/ | Single-Cow-5163 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190xm5x | false | null | t3_190xm5x | /r/LocalLLaMA/comments/190xm5x/cant_continue_old_chats/ | false | false | self | 3 | null |
Used GPT4ALL for the first time and wondering if I could somehow feed it (technical) PDFs and turn it into "sidekick" for embedded programming? | 5 | I've just encountered a YT video that talked about GPT4ALL and it got me really curious, as I've always liked Chat-GPT - until it got bad.
I'm doing some embedded programming on all kinds of hardware - like STM32 Nucleo boards and Intel-based FPGAs - and every board I own comes with a huge technical PDF that specifies where every peripheral is located on the board, how it should be addressed, what its limitations are, etc.
So I wondered if it would be possible to shove those technical PDFs into some model and have it spit out the info I need on demand, instead of manually going through the PDFs myself? Like asking it which pins on the board are connected to "UART1" and having it calculate the supported baud rate with respect to that peripheral and the device's currently assigned clock speed, etc.
I'm not really interested in the tech behind large language models (I have no idea what the difference is between the models and the software that runs them), just wondering if this can be done locally and offline?
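For reference, this is the standard retrieval-augmented generation (RAG) pattern, and it can run fully offline. A rough sketch, assuming pypdf and chromadb and a made-up file name; the naive chunking here would need to be smarter for pinout tables:

```python
from pypdf import PdfReader
import chromadb

reader = PdfReader("stm32_nucleo_manual.pdf")          # hypothetical file name
pages = [page.extract_text() or "" for page in reader.pages]

# naive fixed-size chunking; datasheets usually want table-aware splitting
chunks = []
for text in pages:
    chunks.extend(text[i:i + 1000] for i in range(0, len(text), 1000))

client = chromadb.Client()
col = client.create_collection("datasheet")
col.add(documents=chunks, ids=[f"chunk-{i}" for i in range(len(chunks))])

question = "Which pins are connected to UART1?"
hits = col.query(query_texts=[question], n_results=5)
context = "\n---\n".join(hits["documents"][0])

prompt = f"Answer using only this documentation:\n{context}\n\nQuestion: {question}"
# hand `prompt` to the local model of your choice (GPT4All, llama.cpp, ...)
print(prompt[:500])
```

The model never "learns" the PDF; it just gets the few relevant chunks pasted into each question, which is usually enough for lookups like pin assignments, while calculations such as the baud-rate example are the part it is most likely to get wrong.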
​ | 2024-01-07T17:26:15 | https://www.reddit.com/r/LocalLLaMA/comments/190x884/used_gpt4all_for_the_first_time_and_wondering_if/ | SaarN | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190x884 | false | null | t3_190x884 | /r/LocalLLaMA/comments/190x884/used_gpt4all_for_the_first_time_and_wondering_if/ | false | false | self | 5 | null |
Mixtral vs Mistral Medium | 2 | Does anyone have a viewpoint about which is best, given that they have been out for some time now? | 2024-01-07T16:51:39 | https://www.reddit.com/r/LocalLLaMA/comments/190wfaf/mixtral_vs_mistral_medium/ | ComprehensiveWord477 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190wfaf | false | null | t3_190wfaf | /r/LocalLLaMA/comments/190wfaf/mixtral_vs_mistral_medium/ | false | false | self | 2 | null |
RAG for RPG Rules and Years of Campaign Data - Feasibility and Limitations? | 13 | A friend asked me if it is possible to put all those data (mostly pdf, excel sheets and latex) into a llm and get useful information. I know RAG is one of the possible solutions, but i can not find good information on how large the data can be for single documents or in whole. also on how truthful all the answers will be.
I'm looking for insights on the following key questions:
* What are the limitations regarding data capacity for individual documents and the system as a whole?
* How reliable and accurate can the output from such a model be expected to be?
* Is the Mistral model suited for this kind of task?
* Are there any comprehensive step-by-step guides available for setting up and using an LLM in this context?
* What potential challenges or important considerations might I have overlooked?
The aim is to deploy this system on a local setup, utilizing a Debian-based PC with an NVIDIA RTX 4080. Any experiences, advice, or insights, especially from those who have tackled similar projects, would be immensely helpful. What do you think, Reddit? | 2024-01-07T16:51:06 | https://www.reddit.com/r/LocalLLaMA/comments/190wetc/rag_for_rpg_rules_and_years_of_campaign_data/ | _ralph_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190wetc | false | null | t3_190wetc | /r/LocalLLaMA/comments/190wetc/rag_for_rpg_rules_and_years_of_campaign_data/ | false | false | self | 13 | null |
LLM + translate model vs Finetuning from scratch | 1 | [removed] | 2024-01-07T16:49:36 | https://www.reddit.com/r/LocalLLaMA/comments/190wdi8/llm_translate_model_vs_finetuning_from_scratch/ | maayon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190wdi8 | false | null | t3_190wdi8 | /r/LocalLLaMA/comments/190wdi8/llm_translate_model_vs_finetuning_from_scratch/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'nMQgcA_y0RC4hgFePWX8SlBkPjhSXRjtB_ZiIK-2zK4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WWZy1A3cELSjML621MsbIuDR9vvH8NMpjCuaHv_Cw_U.jpg?width=108&crop=smart&auto=webp&s=c441b9f66175340fee4d0c143cb9c89bca6d4cd5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WWZy1A3cELSjML621MsbIuDR9vvH8NMpjCuaHv_Cw_U.jpg?width=216&crop=smart&auto=webp&s=9fb4995d796d37707ec04944c9aa5c4563fa3f94', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WWZy1A3cELSjML621MsbIuDR9vvH8NMpjCuaHv_Cw_U.jpg?width=320&crop=smart&auto=webp&s=a783c68e1e63027045696dc025e09ca6f41069a7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WWZy1A3cELSjML621MsbIuDR9vvH8NMpjCuaHv_Cw_U.jpg?width=640&crop=smart&auto=webp&s=18f68b186ac1f156f1e9e42bb1bb526e8d3476ab', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WWZy1A3cELSjML621MsbIuDR9vvH8NMpjCuaHv_Cw_U.jpg?width=960&crop=smart&auto=webp&s=2864d792f5bff8cda8f159c5b8c1694b1fcdbc18', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WWZy1A3cELSjML621MsbIuDR9vvH8NMpjCuaHv_Cw_U.jpg?width=1080&crop=smart&auto=webp&s=6b1053a76648d956b94367ebe2b7222e416fb3cf', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WWZy1A3cELSjML621MsbIuDR9vvH8NMpjCuaHv_Cw_U.jpg?auto=webp&s=e0e33bcd2f253df9c2a11b8848e09caf59403540', 'width': 1200}, 'variants': {}}]} |
Things to look for in a PC build for AI | 3 | So I am thinking of building a new computer with a strong focus on being able to run models like Llama 7B and Mixtral 8x7B locally, and I was wondering what I should prioritize. A while back I heard everyone saying a good GPU with a lot of VRAM was important, but now I hear a lot of people saying RAM and CPU matter too. So what should I focus on?
PS. I am thinking about going for a 4090, thoughts?
And btw money is not a problem. | 2024-01-07T16:48:40 | https://www.reddit.com/r/LocalLLaMA/comments/190wcsg/thinks_to_look_for_in_a_pc_build_for_ai/ | Schenk06 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190wcsg | false | null | t3_190wcsg | /r/LocalLLaMA/comments/190wcsg/thinks_to_look_for_in_a_pc_build_for_ai/ | false | false | self | 3 | null |
My Codellama model (merge + additional FT) Quantized as AWQ & GGUF | 15 | I held off on posting this because there are some who seem to enjoy going around and bashing models just because, and there was that one post a few weeks ago going off the rails implying that anyone doing merging was basically just in it to scam people.
However, a bunch of cool people here recommended I go ahead and post it so here it is. This model started out as just something I was working on to learn fine tuning, but during that process I learned about merging so I decided to restart and work that in as well.
This model is the result of merging a few Codellama models with some additional fine tuning on top of the merged model.
It's been decent at what I've thrown at it (mostly python and some c++), but I don't expect it to be earth shattering. As a long time Linux/BSD user and developer, I take the Open Source Ethos to heart, so I have wanted to share this for others to tinker with or build off of if they want. As I mentioned this was mostly a learning experience for me, but I ended up with something I'm happy with so I'm sharing it.
I've already quantized this in AWQ and GGUF format. Everything here: [https://huggingface.co/BallisticAI](https://huggingface.co/BallisticAI)
I think the future for code models is in smaller focused models. ie, one for Python, one for JS, one for C/C++. I think that will result in better quality code as well as being more flexible where it can run. If I can get time I'm going to slowly work on fine tuning a smaller model to be focused entirely on one language, hopefully to end up with even better results.
​ | 2024-01-07T16:42:47 | https://www.reddit.com/r/LocalLLaMA/comments/190w80c/my_codellama_model_merge_additional_ft_quantized/ | q5sys | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190w80c | false | null | t3_190w80c | /r/LocalLLaMA/comments/190w80c/my_codellama_model_merge_additional_ft_quantized/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': 'oheqXgiJdUktp8kctkFMNFrEv65NeVHIs3GKemvHMXg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_WWv7A9e_AZ8Dj-8Anf5ZaHeYHalb3skMhuNk3wzk-w.jpg?width=108&crop=smart&auto=webp&s=d9f80099aa4f003718191b7912c8626687a56aea', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/_WWv7A9e_AZ8Dj-8Anf5ZaHeYHalb3skMhuNk3wzk-w.jpg?width=216&crop=smart&auto=webp&s=3a936c4d6f9b18cb4d52490ac418707488cc639b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/_WWv7A9e_AZ8Dj-8Anf5ZaHeYHalb3skMhuNk3wzk-w.jpg?width=320&crop=smart&auto=webp&s=490711eec7849ebc14ad55023f31dbdc4b2ac41d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/_WWv7A9e_AZ8Dj-8Anf5ZaHeYHalb3skMhuNk3wzk-w.jpg?width=640&crop=smart&auto=webp&s=de122da228660eed87342bc13ba27d5a67fc7160', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/_WWv7A9e_AZ8Dj-8Anf5ZaHeYHalb3skMhuNk3wzk-w.jpg?width=960&crop=smart&auto=webp&s=887d516cfad54f704f47c31b915db8f533822aff', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/_WWv7A9e_AZ8Dj-8Anf5ZaHeYHalb3skMhuNk3wzk-w.jpg?width=1080&crop=smart&auto=webp&s=37084607a51f059153150e507a3ade0b4d7138dd', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/_WWv7A9e_AZ8Dj-8Anf5ZaHeYHalb3skMhuNk3wzk-w.jpg?auto=webp&s=68139ec7bbb690bb0515b657f4a6166436985af1', 'width': 1200}, 'variants': {}}]} |
Does the inference time increase with longer inputs? | 3 | Hey, does the inference time (ms per token) change depending on how long the input is?
Let's say I have a text and want the LLM to give me a summary of the text. Does the speed of the LLM decrease when the text it summarizes is longer? And if so, how big will the difference be?
For example, text1 is 100 tokens long and text2 is 2,000 tokens long. Will text1 and text2 be summarized by the LLM at the same speed, or will it take e.g. 20x as long for text2 to be summarized?
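A quick way to see the effect empirically is to time the same summarization call with a short and a long input. A minimal sketch with llama-cpp-python; the model path and repeat counts are placeholders:

```python
import time
from llama_cpp import Llama

llm = Llama(model_path="mistral-7b-instruct.Q4_K_M.gguf", n_ctx=4096, verbose=False)

def timed_summary(text: str) -> float:
    start = time.perf_counter()
    llm(f"Summarize the following text:\n{text}\n\nSummary:", max_tokens=128)
    return time.perf_counter() - start

short_text = "The cat sat on the mat. " * 15    # roughly 100 tokens
long_text = "The cat sat on the mat. " * 300    # roughly 2000 tokens

print("short prompt:", round(timed_summary(short_text), 2), "s")
print("long prompt: ", round(timed_summary(long_text), 2), "s")
```

The extra time mostly comes from the prompt-processing (prefill) phase, which grows with input length, while per-token generation speed usually degrades only gradually as the context fills up.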
I googled but I didn't find an answer. Probably because it's a dumb question, so sorry in advance! | 2024-01-07T16:39:53 | https://www.reddit.com/r/LocalLLaMA/comments/190w5or/does_the_inference_time_increase_with_longer/ | Bluhbaerie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190w5or | false | null | t3_190w5or | /r/LocalLLaMA/comments/190w5or/does_the_inference_time_increase_with_longer/ | false | false | self | 3 | null |
How do I use MedPrompt with GPT? | 3 | I am trying to do research where I input different case reports and see what GPTs output. However, how do I implement MedPrompt architecture with GPTs?
[https://www.microsoft.com/en-us/research/blog/the-power-of-prompting/](https://www.microsoft.com/en-us/research/blog/the-power-of-prompting/)
[https://github.com/microsoft/promptbase](https://github.com/microsoft/promptbase)
Here is the linked github.
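The promptbase repo is the reference implementation, but the core MedPrompt ingredients (kNN-selected few-shot examples, chain-of-thought, and a choice-shuffling ensemble with majority voting) can be reproduced directly against the API. Below is a stripped-down sketch of just the shuffle-and-vote part; the model name, prompt wording, and answer-matching heuristic are my own placeholders, not the official code:

```python
import random
from collections import Counter
from openai import OpenAI

client = OpenAI()  # needs OPENAI_API_KEY

def ask_once(case_report: str, options: list[str]) -> str:
    shuffled = random.sample(options, k=len(options))  # new option order each call
    menu = "\n".join(f"- {o}" for o in shuffled)
    prompt = (f"{case_report}\n\nOptions:\n{menu}\n\n"
              "Reason step by step, then end with exactly one option copied verbatim.")
    reply = client.chat.completions.create(
        model="gpt-4",
        temperature=1.0,
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    # crude heuristic: the option mentioned last in the reply is taken as the answer
    return max(options, key=lambda o: reply.rfind(o))

def medprompt_style_answer(case_report: str, options: list[str], n: int = 5) -> str:
    votes = Counter(ask_once(case_report, options) for _ in range(n))
    return votes.most_common(1)[0][0]
```

For free-text case reports without fixed options, only the self-consistency voting part carries over; the dynamic few-shot selection additionally needs an embedded pool of labeled examples to pull neighbours from.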
​ | 2024-01-07T15:57:20 | https://www.reddit.com/r/LocalLLaMA/comments/190v71b/how_do_i_use_medprompt_with_gpt/ | InDebt2Medicine | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190v71b | false | null | t3_190v71b | /r/LocalLLaMA/comments/190v71b/how_do_i_use_medprompt_with_gpt/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'L9Iu-YaJRBYICojgBF-pdkfscUg3ao4hmHKLsANIQdY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/mOGfZtwLuTHmCdccq5tx_8hm9uAQnsyHScz8Fz8_nn0.jpg?width=108&crop=smart&auto=webp&s=cb6eeb00a7ffb3a23e1eb5eeecf563516da0807e', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/mOGfZtwLuTHmCdccq5tx_8hm9uAQnsyHScz8Fz8_nn0.jpg?width=216&crop=smart&auto=webp&s=61dcccbb8ebd647e7fd05d4e0848345169b98140', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/mOGfZtwLuTHmCdccq5tx_8hm9uAQnsyHScz8Fz8_nn0.jpg?width=320&crop=smart&auto=webp&s=755d0852155be3d0b009b1d47312ad3df76acb3f', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/mOGfZtwLuTHmCdccq5tx_8hm9uAQnsyHScz8Fz8_nn0.jpg?width=640&crop=smart&auto=webp&s=20f20c193febf99b85ab4ad1b2fe12d51b85da38', 'width': 640}, {'height': 501, 'url': 'https://external-preview.redd.it/mOGfZtwLuTHmCdccq5tx_8hm9uAQnsyHScz8Fz8_nn0.jpg?width=960&crop=smart&auto=webp&s=21ed6a9318cdbf0937fcfd3863645aa4f5bb2efe', 'width': 960}, {'height': 564, 'url': 'https://external-preview.redd.it/mOGfZtwLuTHmCdccq5tx_8hm9uAQnsyHScz8Fz8_nn0.jpg?width=1080&crop=smart&auto=webp&s=ddb564844076bd65ae843dcb01d05fe3b373b7cc', 'width': 1080}], 'source': {'height': 627, 'url': 'https://external-preview.redd.it/mOGfZtwLuTHmCdccq5tx_8hm9uAQnsyHScz8Fz8_nn0.jpg?auto=webp&s=3fbb5b6cc1cd8cf44db82bbfc64e71aa007b552b', 'width': 1200}, 'variants': {}}]} |
Tip for writing stories: Don't use the word "Story" in your prompt | 135 | I've had a lot of issues when using LLMs to assist in writing story generations. Constantly, there seems to be a strong predisposition to write a conclusion to each section, write some moral that the character took away, or write about how they would overcome the challenges facing them and become stronger for their experience.
After a lot of telling models "don't moralize or conclude each section" and similar messages, to no great effect, I considered my experience with Stable Diffusion. Frequently when making prompts for generating images, even the mere inclusion of a word -- regardless of its negation -- can cause it to start influencing the result strongly.
In the case of LLMs and story generation, the word at issue seems to be "story". By removing it and instructing the model to "narrate events" instead, it cuts down on a lot of undesirable behavior significantly. I've found that using a system message like the following seems to work well for me:
You are an author narrating events based on the provided prompt below. Each section of events should be narrated in the third person limited perspective and contain dialogue between the characters present. The language should be straightforward and to the point. Each section should be left open for continuation.
I'd love to hear other peoples' experience writing with LLMs. It fills a bit of an uncommon space between roleplay and coding, so it doesn't seem to get quite as much attention. | 2024-01-07T15:56:37 | https://www.reddit.com/r/LocalLLaMA/comments/190v6iu/tip_for_writing_stories_dont_use_the_word_story/ | Stepfunction | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190v6iu | false | null | t3_190v6iu | /r/LocalLLaMA/comments/190v6iu/tip_for_writing_stories_dont_use_the_word_story/ | false | false | self | 135 | null |
llama.cpp CPU optimization | 25 | I love llama.cpp, so don't take this as a criticism of the project, but why does it peg every core to 100% when it's often waiting on IO anyway?
I have a 32 thread / 16 core CPU (Ryzen 3950x) and I did a test which shows that assigning 32 threads to do model inference is a complete waste of electricity. In fact running with less threads produces much better performance. Even just assigning 4 threads to inference produces better performance than 32 threads, and it actually matches performance with 16 threads. (8 threads being the optimum).
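For anyone wanting to repeat the measurement, the sweep is easy to script; a small sketch with llama-cpp-python follows (model path is a placeholder). The usual explanation is that generation on CPU is largely memory-bandwidth-bound, so past a point the extra threads mostly contend for the same bandwidth while still spinning at 100%:

```python
import time
from llama_cpp import Llama

for n in (4, 8, 16, 32):
    llm = Llama(model_path="model.Q4_K_M.gguf", n_threads=n, verbose=False)
    t0 = time.perf_counter()
    out = llm("Write one paragraph about CPUs.", max_tokens=128)
    dt = time.perf_counter() - t0
    tokens = out["usage"]["completion_tokens"]
    print(f"{n:>2} threads: {tokens / dt:.1f} tok/s")
    del llm  # free the model before the next run
```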
Can anyone familiar with the code base shed some light? Is this something that can be optimized? What is llama.cpp doing with all those CPU cycles? | 2024-01-07T15:53:19 | https://www.reddit.com/r/LocalLLaMA/comments/190v426/llamacpp_cpu_optimization/ | noiserr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190v426 | false | null | t3_190v426 | /r/LocalLLaMA/comments/190v426/llamacpp_cpu_optimization/ | false | false | self | 25 | null |
Deploy TinyLlama as an API endpoint | 1 | SLMs (Small Language Models) are going to be a lot more powerful in 2024. And there must be an easy way to deploy and consume them on-demand for building applications.
So I created a colab notebook that enables you to quickly deploy and access TinyLlama 1.1B model on MonsterAPI as an API endpoint. This model was trained on 1T tokens!
Additionally, the API endpoint is automatically configured for higher throughput with batching.
This would not only cost you way less than LLMs but also give you quick capability to build chat applications using this model to integrate in your business logic and workflows.
Try it here:
[https://colab.research.google.com/drive/1F7yI1oUbQ3Csvlt6wdiny\_N3KXfOYue4](https://colab.research.google.com/drive/1F7yI1oUbQ3Csvlt6wdiny_N3KXfOYue4) | 2024-01-07T15:49:38 | https://www.reddit.com/r/LocalLLaMA/comments/190v19e/deploy_tinyllama_as_an_api_endpoint/ | gvij | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190v19e | false | null | t3_190v19e | /r/LocalLLaMA/comments/190v19e/deploy_tinyllama_as_an_api_endpoint/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=216&crop=smart&auto=webp&s=2817183828c9747b960cb2e55c59cfa41f4f9ded', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?auto=webp&s=ed5da41e2c4cee7a9e495c8291ecf5604f0e169d', 'width': 260}, 'variants': {}}]} |
adding a second GPU - another 3090 or a 4090? | 1 | I'm going to buy a second GPU soon to get more VRAM. Right now I have an RTX 3090. I could afford a 4090, but I'm wondering if there are benefits to having a second 3090 and NVLink versus a 3090 and a 4090. Anyone have any insights? | 2024-01-07T15:19:09 | https://www.reddit.com/r/LocalLLaMA/comments/190udev/adding_a_second_gpu_another_3090_or_a_4090/ | FortunaVelutLuna | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190udev | false | null | t3_190udev | /r/LocalLLaMA/comments/190udev/adding_a_second_gpu_another_3090_or_a_4090/ | false | false | self | 1 | null |
Transform medical and research papers into knowledge bases with paperetl | 1 | 2024-01-07T15:18:28 | https://github.com/neuml/paperetl | davidmezzetti | github.com | 1970-01-01T00:00:00 | 0 | {} | 190ucva | false | null | t3_190ucva | /r/LocalLLaMA/comments/190ucva/transform_medical_and_research_papers_into/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Y4NWBFjNP3Ste9Mj_gsQpmzbF9z64vda_DCDA6v7Uw0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ACIAQhlEJQHfEMePku9DdfiCcfYRsbfDqu-rbsE5OZc.jpg?width=108&crop=smart&auto=webp&s=593de29ec4004625d11a17d8823eed57aedf57d4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ACIAQhlEJQHfEMePku9DdfiCcfYRsbfDqu-rbsE5OZc.jpg?width=216&crop=smart&auto=webp&s=7987f9f7e58ab26d1f9b9de94a5dfdf3cfc9c40f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ACIAQhlEJQHfEMePku9DdfiCcfYRsbfDqu-rbsE5OZc.jpg?width=320&crop=smart&auto=webp&s=a45af927eea60d700e0083d3e22775bba09b1107', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ACIAQhlEJQHfEMePku9DdfiCcfYRsbfDqu-rbsE5OZc.jpg?width=640&crop=smart&auto=webp&s=84ac8eb7912ccecca9b435fffe9a4b67f8c01d4a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ACIAQhlEJQHfEMePku9DdfiCcfYRsbfDqu-rbsE5OZc.jpg?width=960&crop=smart&auto=webp&s=ce47ceb7086d7f0e8a2edadd02bcb1a6e751c32b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ACIAQhlEJQHfEMePku9DdfiCcfYRsbfDqu-rbsE5OZc.jpg?width=1080&crop=smart&auto=webp&s=cfb2650ac1e39e4f9aca1c5d90a9117233783fd5', 'width': 1080}], 'source': {'height': 680, 'url': 'https://external-preview.redd.it/ACIAQhlEJQHfEMePku9DdfiCcfYRsbfDqu-rbsE5OZc.jpg?auto=webp&s=6db93d8e32bc8bf4f6e5b087b997534fb234bbe5', 'width': 1360}, 'variants': {}}]} | ||
[Question] Any updates on Flash Attention 2 build on Windows (Jan 2024)? | 1 | Hi all. A while ago, there was a discussion on making [Flash Attention 2 built on Windows](https://www.reddit.com/r/LocalLLaMA/comments/16skcwm/any_way_to_get_flash_attention_2_working_on/). However, I found that the information is currently obsolete, even though it is only about a few months old. I tried to reconcile the information and tried to build v2.4.2 of Flash Attention 2 and it did not compile. My setup is:
* Python 3.11
* Pytorch 2.1.2
* CUDA 12.1
* RTX 4090x2 with 192GB RAM. Intel i9 14900
Would any kind soul share either patched source files or wheel binaries? Thanks! | 2024-01-07T14:39:06 | https://www.reddit.com/r/LocalLLaMA/comments/190tich/question_any_updates_on_flash_attention_2_build/ | epicrob | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190tich | false | null | t3_190tich | /r/LocalLLaMA/comments/190tich/question_any_updates_on_flash_attention_2_build/ | false | false | self | 1 | null |
has anyone analyzed how LLMs select 'random' stuff? | 7 | I am curious if there is any info out there on how an LLM chooses 'randomly'.
```
Pick a random word from here:
- apple
- john
- smith
- freak
- lion
```
Tested on LoneStriker\_bagel-34b-v0.2-8.0bpw-h8-exl2 (never picked smith out of 30 tries, most were lion)
Tested on LoneStriker\_SUS-Chat-34B-8.0bpw-h8-exl2 (never picked john, but it picked smith after it first selected a random number from 1 to 5 then it selected 2 then it said 'smith', lewl)
The vast majority of picks are 'lion', either because it's the last word (like us humans, models seem to put more weight on 'the last presented things') or simply because it likes 'lion'. I also tried it with something bland like 'gray' in the list, and the overwhelming majority of picks were 'gray'.
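If you want something more direct than rerolling 30 times, you can read the next-token distribution off the logits; the 'choice' is just a sample from it. A quick sketch with transformers (the model name is only a stand-in, and scoring just the first sub-token of each word is a simplification):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"   # any causal LM works here
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt = "Pick a random word from here:\n- apple\n- john\n- smith\n- freak\n- lion\nAnswer:"
ids = tok(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0, -1]          # scores for the next token
probs = torch.softmax(logits, dim=-1)

for word in ["apple", "john", "smith", "freak", "lion"]:
    tid = tok.encode(" " + word, add_special_tokens=False)[0]   # first sub-token only
    print(f"{word:>6}: {probs[tid].item():.4f}")
```

With greedy or low-temperature sampling the top word wins almost every time, which is exactly the 'always lion' behaviour; raising the temperature flattens the distribution but never makes it uniform.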
I could play more with this, but I'm curious to run some old-fashioned inference on your beautiful minds. | 2024-01-07T14:37:45 | https://www.reddit.com/r/LocalLLaMA/comments/190thet/has_anyone_analyzed_how_llms_select_random_stuff/ | shaman-warrior | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190thet | false | null | t3_190thet | /r/LocalLLaMA/comments/190thet/has_anyone_analyzed_how_llms_select_random_stuff/ | false | false | self | 7 | null |
Convert BLOOM based model to gguf with llama.cpp | 1 | [removed] | 2024-01-07T14:37:38 | https://www.reddit.com/r/LocalLLaMA/comments/190thcb/convert_bloom_based_model_to_gguf_with_llamacpp/ | Christ0ph_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190thcb | false | null | t3_190thcb | /r/LocalLLaMA/comments/190thcb/convert_bloom_based_model_to_gguf_with_llamacpp/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'EuSxoJsoDqwQ__gF6pGtvT0fN-kj2YqM2WJYa1kBkN4', 'resolutions': [{'height': 36, 'url': 'https://external-preview.redd.it/fx_wZ2-ohfyb5W_ahn6ZVW3qOYRao3KFBtfsa9FyNBI.jpg?width=108&crop=smart&auto=webp&s=196af3f352b41ecb6a0526cf8cd2c41dceacd29c', 'width': 108}, {'height': 72, 'url': 'https://external-preview.redd.it/fx_wZ2-ohfyb5W_ahn6ZVW3qOYRao3KFBtfsa9FyNBI.jpg?width=216&crop=smart&auto=webp&s=0e189c00421a96406e1200815c09a13d1a4a5dae', 'width': 216}, {'height': 106, 'url': 'https://external-preview.redd.it/fx_wZ2-ohfyb5W_ahn6ZVW3qOYRao3KFBtfsa9FyNBI.jpg?width=320&crop=smart&auto=webp&s=933506e87df1045093385a12c4e7371af6d7dac6', 'width': 320}, {'height': 213, 'url': 'https://external-preview.redd.it/fx_wZ2-ohfyb5W_ahn6ZVW3qOYRao3KFBtfsa9FyNBI.jpg?width=640&crop=smart&auto=webp&s=fba945b12eaacd2498e17ae93ddd5822a9ae371c', 'width': 640}], 'source': {'height': 300, 'url': 'https://external-preview.redd.it/fx_wZ2-ohfyb5W_ahn6ZVW3qOYRao3KFBtfsa9FyNBI.jpg?auto=webp&s=6a4f7ad656c2d5861fb4528acd7cc99faed4fba0', 'width': 900}, 'variants': {}}]} | |
This is a bold prediction from Ilya Sutskever. Next token prediction = capable of superhuman performance. My question: does he specifically mean transformers here when he says base neural net? | 34 | 2024-01-07T13:57:14 | https://v.redd.it/pb8z8jyhx0bc1 | TysonUsykFury | /r/LocalLLaMA/comments/190sono/this_is_a_bold_prediction_from_ilya_sutskever/ | 1970-01-01T00:00:00 | 0 | {} | 190sono | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/pb8z8jyhx0bc1/DASHPlaylist.mpd?a=1707357449%2CMTA5YjMwNTBmMzE5ODcyMGFkYmZhZDc2MjZhMWYwYjZhMDdiNmNjYzM2YTE4OWQwYjAzNTcxYzdmNTQzZjU1ZA%3D%3D&v=1&f=sd', 'duration': 127, 'fallback_url': 'https://v.redd.it/pb8z8jyhx0bc1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/pb8z8jyhx0bc1/HLSPlaylist.m3u8?a=1707357449%2CYzA4OGI4Zjg1YjllZWM0YmJkZGQxNGEwYmRhMzUxZjMwMWZiOWY5MWRjZDc3YzkxZjEyNDIwOGI1ZTQyNDMzMA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/pb8z8jyhx0bc1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_190sono | /r/LocalLLaMA/comments/190sono/this_is_a_bold_prediction_from_ilya_sutskever/ | false | false | 34 | {'enabled': False, 'images': [{'id': 'NHlua3E3cGh4MGJjMUJqrwP9xesw1XpAJvjrZJyTxHbb9Si1bsVxYmYIdaA9', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NHlua3E3cGh4MGJjMUJqrwP9xesw1XpAJvjrZJyTxHbb9Si1bsVxYmYIdaA9.png?width=108&crop=smart&format=pjpg&auto=webp&s=6907c1f76814a7199b9ac66566b04dcc3ec69268', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/NHlua3E3cGh4MGJjMUJqrwP9xesw1XpAJvjrZJyTxHbb9Si1bsVxYmYIdaA9.png?width=216&crop=smart&format=pjpg&auto=webp&s=c23763b5e7d156b788bc4af0a3cebfc8be313503', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/NHlua3E3cGh4MGJjMUJqrwP9xesw1XpAJvjrZJyTxHbb9Si1bsVxYmYIdaA9.png?width=320&crop=smart&format=pjpg&auto=webp&s=0b2e10f55160cb944870fb0a62eb99fcbb4e4669', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/NHlua3E3cGh4MGJjMUJqrwP9xesw1XpAJvjrZJyTxHbb9Si1bsVxYmYIdaA9.png?width=640&crop=smart&format=pjpg&auto=webp&s=65edd08d4940a0766f39693cdff13737e74ad98d', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/NHlua3E3cGh4MGJjMUJqrwP9xesw1XpAJvjrZJyTxHbb9Si1bsVxYmYIdaA9.png?width=960&crop=smart&format=pjpg&auto=webp&s=68635d7f1a40cbfd4bb8d63b42f46ea93cbf8f87', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/NHlua3E3cGh4MGJjMUJqrwP9xesw1XpAJvjrZJyTxHbb9Si1bsVxYmYIdaA9.png?width=1080&crop=smart&format=pjpg&auto=webp&s=eedfa2a26b6c604e634c45b8f8bff98645f85785', 'width': 1080}], 'source': {'height': 607, 'url': 'https://external-preview.redd.it/NHlua3E3cGh4MGJjMUJqrwP9xesw1XpAJvjrZJyTxHbb9Si1bsVxYmYIdaA9.png?format=pjpg&auto=webp&s=758a88eac746bdf20c667f64c41f2b7b77acb819', 'width': 1080}, 'variants': {}}]} | ||
Can someone give an overview of prompting strategies like tree search and the like ? | 4 | I am currently getting in to the preparations for my doctor thesis. I am relatively new to llms. I need to run text evaluations to filter out specific information from the texts and was wondering how different agants/ prompting methods could be leveraged for soemthing like this.
And I was hoping someone would be sonkind to provide an overview over the different strategies and use cases. It would be extremely helpful. :)
If the question is stupid let me know why :D like i said still learning ^^
Thank you all in advance ^^ | 2024-01-07T13:17:48 | https://www.reddit.com/r/LocalLLaMA/comments/190rykd/can_someone_give_an_overview_of_prompting/ | Noxusequal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190rykd | false | null | t3_190rykd | /r/LocalLLaMA/comments/190rykd/can_someone_give_an_overview_of_prompting/ | false | false | self | 4 | null |
phi-2-GGUF outputs garbage | 3 | I downloaded 8bit quantized model: [https://huggingface.co/TheBloke/phi-2-GGUF](https://huggingface.co/TheBloke/phi-2-GGUF)
But it outputs not even "bad answers", it outputs garbage:
> Hello, how do I fix the car?
>
>AI
>
>{type: simple\_qa}
>
><|question|>Solve: -2x-1 <= x+5.8
>
><|question\_end|>Answer: -2.6
Or
> Please tell me a story about red riding hood
>
>AI
>
>Question 1: If a person is standing on the equator and facing east, which direction would they be looking at?
>
>Answer 1: They would be looking towards the west.
>
>Explanation: This has to do with the rotation of the Earth causing day and night. When you are on the equator, it's constantly rotating in one direction so if you face east, you will eventually end up facing west as the Earth rotates.
>
>Question 2: If a person is standing on the North Pole during summer solstice, what time would they experience midnight?
>
>Answer 2: They would not experience midnight because there are no days or nights at the poles due to constant sunlight and darkness respectively.
And every question like that, not even single one answer about what I ask.
At this point, what am I even looking at? I'm loading it with standard settings in text-generation-webui, and I tried changing the temperature, but it fixes nothing.
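For what it's worth, phi-2 is a base-style model rather than a chat model, and its model card expects the QA framing ("Instruct: ...\nOutput:"); without that framing and a stop string it tends to free-associate exam questions exactly like the outputs above. A minimal check with llama-cpp-python, where the file name and sampling settings are just placeholders:

```python
from llama_cpp import Llama

llm = Llama(model_path="phi-2.Q8_0.gguf", n_ctx=2048, verbose=False)

prompt = "Instruct: Please tell me a story about Red Riding Hood.\nOutput:"
out = llm(prompt, max_tokens=300, temperature=0.7, stop=["Instruct:", "<|endoftext|>"])
print(out["choices"][0]["text"])
```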
I can not understand what is wrong. | 2024-01-07T12:45:18 | https://www.reddit.com/r/LocalLLaMA/comments/190rdj1/phi2gguf_outputs_garbage/ | uti24 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190rdj1 | false | null | t3_190rdj1 | /r/LocalLLaMA/comments/190rdj1/phi2gguf_outputs_garbage/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'tn1ckeqsrgkW9jW-8WsYD4NdP_SIK1Y4pfa3EDSxFek', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/0Br4yRVWMR7O3MqEgYuFvKJox2nri8zNRzhse9Sd2Ws.jpg?width=108&crop=smart&auto=webp&s=e956afb3e244d6ed2d92dcc2d5912b1e5ff5ca99', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/0Br4yRVWMR7O3MqEgYuFvKJox2nri8zNRzhse9Sd2Ws.jpg?width=216&crop=smart&auto=webp&s=d0ed6169bd92801ee45cfdb92df3ccb0aa2c7508', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/0Br4yRVWMR7O3MqEgYuFvKJox2nri8zNRzhse9Sd2Ws.jpg?width=320&crop=smart&auto=webp&s=c26faa8c3c836713428c15fd32a792aae8a7d76b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/0Br4yRVWMR7O3MqEgYuFvKJox2nri8zNRzhse9Sd2Ws.jpg?width=640&crop=smart&auto=webp&s=d21db2fb742e33f88b5a25afd4d9ff8d83e87f3c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/0Br4yRVWMR7O3MqEgYuFvKJox2nri8zNRzhse9Sd2Ws.jpg?width=960&crop=smart&auto=webp&s=a361b944f4a61324f57ab4987c06263eea32d677', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/0Br4yRVWMR7O3MqEgYuFvKJox2nri8zNRzhse9Sd2Ws.jpg?width=1080&crop=smart&auto=webp&s=a8b23d9120dfd895a2b6e770982efec75c8b9036', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/0Br4yRVWMR7O3MqEgYuFvKJox2nri8zNRzhse9Sd2Ws.jpg?auto=webp&s=dab830b31ddab1e0179be8f6cfeaad081344f245', 'width': 1200}, 'variants': {}}]} |
Pre-training Mistral | 1 | Is it possible to do pre-training on Mistral? If so are there any guides on how to do this? | 2024-01-07T12:36:49 | https://www.reddit.com/r/LocalLLaMA/comments/190r8io/pretraining_mistral/ | BukHunt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190r8io | false | null | t3_190r8io | /r/LocalLLaMA/comments/190r8io/pretraining_mistral/ | false | false | self | 1 | null |
Long Context Recall Pressure Test - Batch 2 | 65 | **Approach**: Using Gregory Kamradt's **"Needle In A Haystack"** analysis, I explored models with different context lengths.
\- Needle: "What's the most fun thing to do in San Francisco?"
\- Haystack: Essays by Paul Graham
**Video explanation** by Gregory - [https://www.youtube.com/watch?v=KwRRuiCCdmc](https://www.youtube.com/watch?v=KwRRuiCCdmc)
**Batch 1 -** [**https://www.reddit.com/r/LocalLLaMA/comments/18s61fb/pressuretested\_the\_most\_popular\_opensource\_llms/**](https://www.reddit.com/r/LocalLLaMA/comments/18s61fb/pressuretested_the_most_popular_opensource_llms/?sort=new)
**UPDATE 1** \- Thank you all for your responses. I will continue to update newer models / finetunes here as they keep coming. Feel free to post any suggestions or models you’d want in the comments
**UPDATE 2** \- Updated some more models including original tests from Greg as requested. As suggested in the original post comments I am brainstorming more tests for long context models. If you have any suggestions please comment. **Batch 1 & below tests are run on temp=0.0, tests with different temperatures and quantised models coming soon**...
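For anyone who wants to reproduce or extend these runs, the procedure itself is short. A rough sketch of one grid cell follows; `generate()` is a stub for whatever model or API is under test, and the substring scoring is a crude stand-in for the LLM judge used in the original:

```python
def build_haystack(filler: str, needle: str, depth: float, n_chars: int) -> str:
    body = (filler * (n_chars // len(filler) + 1))[:n_chars]
    cut = int(len(body) * depth)
    return body[:cut] + "\n" + needle + "\n" + body[cut:]

def generate(prompt: str) -> str:
    return "stub"   # replace with the model under test

NEEDLE = ("The most fun thing to do in San Francisco is to eat a sandwich "
          "and sit in Dolores Park on a sunny day.")
QUESTION = "What's the most fun thing to do in San Francisco?"

def run_cell(essays: str, depth: float, n_chars: int) -> bool:
    context = build_haystack(essays, NEEDLE, depth, n_chars)
    answer = generate(f"{context}\n\n{QUESTION}\nAnswer only from the document.")
    return "dolores park" in answer.lower()

# the heatmaps below sweep depth over 0.0..1.0 and n_chars up to each context limit
```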
**Models tested**
1️⃣ **16k Context Length (\~ 24 pages/12k words)**
https://preview.redd.it/jp86kfsjh0bc1.png?width=3024&format=png&auto=webp&s=4f5f22ae5ca19ca7007effd75436163507f8445f
https://preview.redd.it/83vx4isjh0bc1.png?width=3024&format=png&auto=webp&s=dc09da79c065fe5fb3677141daacb83b546c8da1
https://preview.redd.it/3y6uohsjh0bc1.png?width=3024&format=png&auto=webp&s=1c8e0247cede9ae0d22b39dbef2aac2d181f00c6
2️⃣ **32k Context Length (\~ 48 pages/24k words)**
https://preview.redd.it/llwefhjmh0bc1.png?width=3024&format=png&auto=webp&s=21c8be3e49f7ec8f7607367f535219b8c711da5a
https://preview.redd.it/7c0o3ijmh0bc1.png?width=3024&format=png&auto=webp&s=87a982009cf9494e01dbd629f7f7e37ef69f48e3
https://preview.redd.it/86y3y25ph0bc1.png?width=3024&format=png&auto=webp&s=80f1ef429adbfd979029456b5fa1e700bd76578e
https://preview.redd.it/8f9cj45ph0bc1.png?width=3024&format=png&auto=webp&s=c9d7b0a8fda9308f73361a50dabf6de4c59e7bb7
https://preview.redd.it/qugd635ph0bc1.png?width=3024&format=png&auto=webp&s=ae0def04239940eefb28e68d10942464d56e84a3
https://preview.redd.it/i8t5w45ph0bc1.png?width=3024&format=png&auto=webp&s=c4891f3e7c16ae4015d6aa959db3d484c89b7fd5
https://preview.redd.it/ydiad35ph0bc1.png?width=3024&format=png&auto=webp&s=127cfe59f84354b0da660de5f26724c4fa2e230a
3️⃣ **128k Context Length (\~ 300 pages/150k words)**
https://preview.redd.it/1kvlhgerh0bc1.png?width=1712&format=png&auto=webp&s=25af8dd77133b13a1291a7d5c417aa47a0bbf2f0
4️⃣ **200k Context Length (\~ 300 pages/150k words)**
https://preview.redd.it/axqhqvsth0bc1.png?width=3024&format=png&auto=webp&s=1abb1dc4c8e5e0c8ab3e928a4a4c6bacb0ef7eee
https://preview.redd.it/9hbafwsth0bc1.png?width=3024&format=png&auto=webp&s=5a191356e9be1a8f3afe42679b3f871cb79532b0
https://preview.redd.it/poghivsth0bc1.png?width=3024&format=png&auto=webp&s=5f2adbde259748714e4e8cd7f5bf7402af70bb60
https://preview.redd.it/wz1g5usth0bc1.png?width=1626&format=png&auto=webp&s=63bdad809e721aeb4dbd96f82cec388bbeb30529
[Anthropic's run with their prompt](https://preview.redd.it/qe7m7u3yh0bc1.png?width=1086&format=png&auto=webp&s=d93e6c857368033151db607f6d69122250ad5ce2) | 2024-01-07T12:31:05 | https://www.reddit.com/r/LocalLLaMA/comments/190r59u/long_context_recall_pressure_test_batch_2/ | ramprasad27 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190r59u | false | {'oembed': {'author_name': 'Greg Kamradt (Data Indy)', 'author_url': 'https://www.youtube.com/@DataIndependent', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/KwRRuiCCdmc?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="I pressure tested GPT-4's 128K context retrieval"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/KwRRuiCCdmc/hqdefault.jpg', 'thumbnail_width': 480, 'title': "I pressure tested GPT-4's 128K context retrieval", 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_190r59u | /r/LocalLLaMA/comments/190r59u/long_context_recall_pressure_test_batch_2/ | false | false | 65 | {'enabled': False, 'images': [{'id': 'OondSl4tj1KDXy8pkUsxQBZHwCiKekJ00sdXFeRLzns', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/k9s_HDytdOH3hMxsYPFIXN-bvkyOM7NyMIrsfE7ys2A.jpg?width=108&crop=smart&auto=webp&s=867a03d49cf317f8ccd573a6cbf632592dcee0e9', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/k9s_HDytdOH3hMxsYPFIXN-bvkyOM7NyMIrsfE7ys2A.jpg?width=216&crop=smart&auto=webp&s=6d08c1f2d892455f6944633d3540b2e4d928ce60', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/k9s_HDytdOH3hMxsYPFIXN-bvkyOM7NyMIrsfE7ys2A.jpg?width=320&crop=smart&auto=webp&s=a295ec5982dde9c474d7a6954d572e57d11d0350', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/k9s_HDytdOH3hMxsYPFIXN-bvkyOM7NyMIrsfE7ys2A.jpg?auto=webp&s=cfa706e70f02ee02ecacb6c04386b5c69692a38e', 'width': 480}, 'variants': {}}]} | |
which model I can launch with a powerful phone ? | 1 | I have a realme gt 5 with 24 GB of ram + 12 virtually with a snapdragon 8 gen 2 do you have an idea of the best models to use? and also how, I am open to new techniques rather than continuing to use llama.cpp with termux... | 2024-01-07T12:10:23 | Terrible_Vegetable4 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 190qtng | false | null | t3_190qtng | /r/LocalLLaMA/comments/190qtng/which_model_i_can_launch_with_a_powerful_phone/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'kK55OdTMW9KALcQ9InAUMEfBZWU4gQe55Lu9Td5NK4s', 'resolutions': [{'height': 110, 'url': 'https://preview.redd.it/5qzqaqmge0bc1.jpeg?width=108&crop=smart&auto=webp&s=a8f68f738fd035403555a04ba482160a99de1a66', 'width': 108}, {'height': 220, 'url': 'https://preview.redd.it/5qzqaqmge0bc1.jpeg?width=216&crop=smart&auto=webp&s=08075123d103caa844df614a1b2674593e580cd6', 'width': 216}, {'height': 325, 'url': 'https://preview.redd.it/5qzqaqmge0bc1.jpeg?width=320&crop=smart&auto=webp&s=76ddf39c0f830401bd34ac12d42d109019ed4a96', 'width': 320}, {'height': 651, 'url': 'https://preview.redd.it/5qzqaqmge0bc1.jpeg?width=640&crop=smart&auto=webp&s=9e68936c1a80a83b8de91a089018e5607a345b5c', 'width': 640}, {'height': 977, 'url': 'https://preview.redd.it/5qzqaqmge0bc1.jpeg?width=960&crop=smart&auto=webp&s=316dec07196a9ca4eaed3cd1118ba68d4c292bea', 'width': 960}, {'height': 1100, 'url': 'https://preview.redd.it/5qzqaqmge0bc1.jpeg?width=1080&crop=smart&auto=webp&s=c0cc7ba5c3d915a83e4d0c1af223fb048ff6d5da', 'width': 1080}], 'source': {'height': 1263, 'url': 'https://preview.redd.it/5qzqaqmge0bc1.jpeg?auto=webp&s=bec2bbeba7eefbe59d49e169913a4f0f73196d07', 'width': 1240}, 'variants': {}}]} | ||
Open_Gpt4_8x7B -> This close to GPT4 locally??? | 1 | I am just thoroughly impressed with this model! It is based on Mixtral, but finetuned really nicely!
It even gets the notorious:
>I have 4 apples today. I ate 3 apples yesterday. How many apples do I have today?
Correct!
[smart co0k1e](https://preview.redd.it/2gdf6fsqb0bc1.png?width=1336&format=png&auto=webp&s=38566acc16d81af8510e3da510de63a902b16247)
Orig. model: [https://huggingface.co/rombodawg/Open\_Gpt4\_8x7B](https://huggingface.co/rombodawg/Open_Gpt4_8x7B)
Bloke's quant: [https://huggingface.co/TheBloke/Open\_Gpt4\_8x7B-GGUF](https://huggingface.co/TheBloke/Open_Gpt4_8x7B-GGUF)
I continue testing hard with my collection of testing prompts and it nails even some other difficult prompts. I was kinda expecting we might get this close to GPT4 in 2024, but boy we are not even past middle of January!
What do you think about this, am I hallucinating or something? Kudos to all the devs involved! Amazing job folks! | 2024-01-07T11:59:09 | https://www.reddit.com/r/LocalLLaMA/comments/190qn4l/open_gpt4_8x7b_this_close_to_gpt4_locally/ | nodating | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190qn4l | false | null | t3_190qn4l | /r/LocalLLaMA/comments/190qn4l/open_gpt4_8x7b_this_close_to_gpt4_locally/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'tmCNFAyh3_qCRhWXfTfOjTbmFmMkClsPbrrWgL5Lgxo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/F6C_QAhR1lDqwQ7C9EyXXnLqGVoFc-KcdGtmb9uXO7Q.jpg?width=108&crop=smart&auto=webp&s=9b2b5d3d22dc2c6e7f53a8eca8a5d61df6b2e562', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/F6C_QAhR1lDqwQ7C9EyXXnLqGVoFc-KcdGtmb9uXO7Q.jpg?width=216&crop=smart&auto=webp&s=f88606f0bda52e193826fdf8d53f94d245e46f2f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/F6C_QAhR1lDqwQ7C9EyXXnLqGVoFc-KcdGtmb9uXO7Q.jpg?width=320&crop=smart&auto=webp&s=3fae7aa1fa38bf958ef80a5ff192157ae2ef2963', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/F6C_QAhR1lDqwQ7C9EyXXnLqGVoFc-KcdGtmb9uXO7Q.jpg?width=640&crop=smart&auto=webp&s=eddeb7de7145030f4832f03cb35be59e7a949abf', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/F6C_QAhR1lDqwQ7C9EyXXnLqGVoFc-KcdGtmb9uXO7Q.jpg?width=960&crop=smart&auto=webp&s=4f84b98dda7c694339716af4cdd56048c3c4739a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/F6C_QAhR1lDqwQ7C9EyXXnLqGVoFc-KcdGtmb9uXO7Q.jpg?width=1080&crop=smart&auto=webp&s=302d64cf44931642aa9b3e2c08ad5c9a99b85e10', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/F6C_QAhR1lDqwQ7C9EyXXnLqGVoFc-KcdGtmb9uXO7Q.jpg?auto=webp&s=8e0329e7e3a0ed73fc0e6bc4928b558d7dcfe39f', 'width': 1200}, 'variants': {}}]} | |
Ideas for a future AL/LLM startup? | 1 | There have been many exciting companies related to LLM that have come into existence in the past 1 year or so.
What are the current pain points which are yet to be addressed properly which a future startup can work on?
It can be both in terms of foundational or applied LLM. | 2024-01-07T11:57:35 | https://www.reddit.com/r/LocalLLaMA/comments/190qmba/ideas_for_a_future_alllm_startup/ | satyajitdass | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190qmba | false | null | t3_190qmba | /r/LocalLLaMA/comments/190qmba/ideas_for_a_future_alllm_startup/ | false | false | self | 1 | null |
After scraping the data of every company on the internet, OpenAI suspended Bytedance for using GPT-4 generated data. Then they have the gall to say the reason is that their technology should be used for GOOD. If using GPT-4 generated data makes Bytedance malevolent, then why even provide API access? | 48 | https://nypost.com/2023/12/18/business/openai-suspends-bytedances-account-after-it-allegedly-used-gpt-to-build-rival-ai-product-report/ | 2024-01-07T11:44:03 | TysonUsykFury | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 190qf1y | false | null | t3_190qf1y | /r/LocalLLaMA/comments/190qf1y/after_scraping_the_data_of_every_company_on_the/ | false | false | 48 | {'enabled': True, 'images': [{'id': 'kU8GtmFIn4H3RNEOZXQHbqaIc0t9sPDmsTp9tiBROD4', 'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/0n52l7er90bc1.png?width=108&crop=smart&auto=webp&s=1c71570599bb1251c797d70a52b25ddff5ed4bb7', 'width': 108}, {'height': 158, 'url': 'https://preview.redd.it/0n52l7er90bc1.png?width=216&crop=smart&auto=webp&s=90db9694ccac565684bab7ed9f87a1bc135410be', 'width': 216}, {'height': 234, 'url': 'https://preview.redd.it/0n52l7er90bc1.png?width=320&crop=smart&auto=webp&s=dd1d5ae56432d3c5809f9603f71e0eb0e252a877', 'width': 320}, {'height': 468, 'url': 'https://preview.redd.it/0n52l7er90bc1.png?width=640&crop=smart&auto=webp&s=bce3b795faac61b5bd891da97be7d9393819f613', 'width': 640}, {'height': 702, 'url': 'https://preview.redd.it/0n52l7er90bc1.png?width=960&crop=smart&auto=webp&s=f20fb77c43348c179a1b390a89df3b97ae8763ea', 'width': 960}, {'height': 790, 'url': 'https://preview.redd.it/0n52l7er90bc1.png?width=1080&crop=smart&auto=webp&s=67713aef5bdd0b167b622a48755b143a1f7204fd', 'width': 1080}], 'source': {'height': 790, 'url': 'https://preview.redd.it/0n52l7er90bc1.png?auto=webp&s=a52d7586d101726ef95b8e94dd2fa8a0408783a3', 'width': 1080}, 'variants': {}}]} | ||
What's the best model to use for websites to turn them into q and a ? | 2 | Hey guys,
So I was developing something like sitegpt but using open source models.
I am sorry, I am a total newbie in the field of AI.
But please try to point me in the right direction and I will do more research.
It's a SAAS product so multiple ecommerce websites have to be trained and the bot will answer questions related to products on that website.
I want it to be very fast, so I am thinking of models which aren't trained on much data; I guess my use case doesn't require a model trained on a lot of information. What's the best way to do this? | 2024-01-07T11:39:32 | https://www.reddit.com/r/LocalLLaMA/comments/190qcnt/whats_the_best_model_to_use_for_websites_to_turn/ | AnabelBain | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190qcnt | false | null | t3_190qcnt | /r/LocalLLaMA/comments/190qcnt/whats_the_best_model_to_use_for_websites_to_turn/ | false | false | self | 2 | null |
Real time Arxiv API + RAG at inference | 5 | Hi everyone,
I’m quite new to the space and as a first project, I’m attempting to create an arxiv chat bot. I’m aware that this kind of bot is something that’s been done many times but I haven’t seen anything with the approach.
Most approaches I came across involve querying the arXiv API and using mainly the abstract as context. My approach involves parsing the PDF along with some of its references, chunking it, and using RAG to retrieve the context. Has anyone implemented anything similar?
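A rough sketch of the fetch-and-parse step, assuming the arxiv pip package and pypdf (the query string and chunk size are arbitrary); the resulting chunks then go into whatever vector store / RAG stack the framework provides:

```python
import arxiv
from pypdf import PdfReader

client = arxiv.Client()
search = arxiv.Search(query="retrieval augmented generation", max_results=1)
paper = next(client.results(search))
pdf_path = paper.download_pdf(filename="paper.pdf")

text = "\n".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
chunks = [text[i:i + 1200] for i in range(0, len(text), 1200)]
print(paper.title, "->", len(chunks), "chunks")

# following references means regex-extracting arXiv IDs from `text`
# and repeating the fetch for each one
```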
I’m currently doing langchain to do all of this but any suggestions of a better framework would be nice! | 2024-01-07T11:35:38 | https://www.reddit.com/r/LocalLLaMA/comments/190qaol/real_time_arxiv_api_rag_at_inference/ | Candlespeaker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190qaol | false | null | t3_190qaol | /r/LocalLLaMA/comments/190qaol/real_time_arxiv_api_rag_at_inference/ | false | false | self | 5 | null |
Can anybody confirm that this speed is normal? | 1 | I'm running koboldcpp with the following parameters for a 7b q8\_0 gguf model I tried:
--usecublas --quiet --threads 10 --gpulayers 999 --contextsize 2048
I'm getting 54 t/s on an RTX 3090, but I've seen people pull off [155 t/s on a 4080](https://github.com/ggerganov/llama.cpp/pull/4773). This seems weird to me; the 4080 certainly isn't 3 times faster than a 3090. I also tried a q2_K GGUF, but there's barely any difference.
Thanks! | 2024-01-07T11:31:35 | https://www.reddit.com/r/LocalLLaMA/comments/190q8lu/can_anybody_confirm_that_this_speed_is_normal/ | OuchieOnChin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190q8lu | false | null | t3_190q8lu | /r/LocalLLaMA/comments/190q8lu/can_anybody_confirm_that_this_speed_is_normal/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'aEPQc_sb6LYdT-qn9qPfnIklsRQ3gZARWeTYh5VB62g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nhzrW2NkjbrptAvo0AnWruzc-DkuDVdonC9pvD_9CBA.jpg?width=108&crop=smart&auto=webp&s=91d652f58c951d7a00ed55dd5777db4ecc4a9246', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nhzrW2NkjbrptAvo0AnWruzc-DkuDVdonC9pvD_9CBA.jpg?width=216&crop=smart&auto=webp&s=f1d2fd84044ff5fa2d8829e25710de23ba43870e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nhzrW2NkjbrptAvo0AnWruzc-DkuDVdonC9pvD_9CBA.jpg?width=320&crop=smart&auto=webp&s=84934c498e268d726f4fffba19c4473f95254285', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nhzrW2NkjbrptAvo0AnWruzc-DkuDVdonC9pvD_9CBA.jpg?width=640&crop=smart&auto=webp&s=37a9499e50dc5ded30a8d9b81e1044fdd31387ac', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nhzrW2NkjbrptAvo0AnWruzc-DkuDVdonC9pvD_9CBA.jpg?width=960&crop=smart&auto=webp&s=a3444eb79f5d2fa14d7224aec79d4f6769d2fd20', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nhzrW2NkjbrptAvo0AnWruzc-DkuDVdonC9pvD_9CBA.jpg?width=1080&crop=smart&auto=webp&s=e992c2c9c85a67dec940406065e5c96d6da28101', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nhzrW2NkjbrptAvo0AnWruzc-DkuDVdonC9pvD_9CBA.jpg?auto=webp&s=5e69a854cd86a7a727df5e3f6b3186416952e8f6', 'width': 1200}, 'variants': {}}]} |
My theory: Microsoft changed Phi-2's license to MIT so they could one up Google and their closed source Gemini Nano model. 🤣 Microsoft may not have the smartphone market but Google's competitors like Samsung, Huawei, Xiaomi and even Apple do and now they can use Phi-2. | 36 | 2024-01-07T10:57:51 | https://www.reddit.com/gallery/190pqi5 | TysonUsykFury | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 190pqi5 | false | null | t3_190pqi5 | /r/LocalLLaMA/comments/190pqi5/my_theory_microsoft_changed_phi2s_license_to_mit/ | false | false | 36 | null | ||
Anybody want to test its viability on much larger models? | 21 | 2024-01-07T10:37:43 | https://twitter.com/sdand/status/1743695855545426362 | Snoo_64233 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 190pgcz | false | null | t3_190pgcz | /r/LocalLLaMA/comments/190pgcz/anybody_want_to_test_its_viability_on_much_larger/ | false | false | default | 21 | null | |
Solution to transcribe Arabic | 1 | Hey everyone,
I am trying to help a local event organizer create a solution that would transcribe Arabic, auto-translate it, and display the translation on a screen. I saw Whisper from OpenAI, but I'm wondering if there is a lesser-known alternative in the wild that I skipped.
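Whisper can do both steps (Arabic speech to English text) in a single pass when run with task="translate". A minimal sketch with the openai-whisper package; the file name and model size are placeholders, and for a live screen you would feed it short rolling audio chunks (faster-whisper is a common swap-in for that):

```python
import whisper

model = whisper.load_model("large-v3")
result = model.transcribe("speech_ar.wav", language="ar", task="translate")
print(result["text"])   # English translation of the Arabic audio
```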
Appreciate insights and thoughts,
Thanks
​ | 2024-01-07T10:34:25 | https://www.reddit.com/r/LocalLLaMA/comments/190per1/solution_to_transcribe_arabic/ | AmeenRoayan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190per1 | false | null | t3_190per1 | /r/LocalLLaMA/comments/190per1/solution_to_transcribe_arabic/ | false | false | self | 1 | null |
Training Loss goes from being in Quadrillions to Zero | 1 | I was recently trying to fine tune CodeLlama model for generating code from text. I tried to replicate the code of WizardCoder for this and just changed the model name (it uses starcoder as the base model ig) and some other tweaks.
When I trained the model, the validation loss always came out as NaN and the training loss jumped from the quadrillions to zero in just one epoch. I guess I am doing something wrong here, but I am unable to figure out what. Has anybody faced anything similar, or can you point me to things I should check? | 2024-01-07T10:30:04 | https://www.reddit.com/r/LocalLLaMA/comments/190pcjq/training_loss_goes_from_being_in_quadrillions_to/ | aayusss21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190pcjq | false | null | t3_190pcjq | /r/LocalLLaMA/comments/190pcjq/training_loss_goes_from_being_in_quadrillions_to/ | false | false | self | 1 | null |
Shoutout to a great RP model | 58 | Howdy everyone! I’ve been lurking this sub for quite some time now, always checking recommendations and tests for the new models, especially interested in those which fare well in roleplaying.
I am that sad type of an individual who has a convoluted, long-ass narrative roleplay (over two thousand messages and still going) in a group chat with the bots, so I’m always in search for models that write good prose. And recently, I stumbled onto an absolutely amazing hidden gem of a model that hasn’t been mentioned here once, so hey, here it is.
The model in question (big thanks to Doctor-Shotgun for this one): https://huggingface.co/Doctor-Shotgun/Nous-Capybara-limarpv3-34B
And yes, in my opinion, it might be even better than Capy-Tess-Yi in terms of writing. And the extended context of 200k works absolutely beautifully, I run 45k context and the bots remember everything - heck, it works even better than Capy-Tess-Yi in that regard, though it might be thanks to the new exl2 format that I’m using (shoutout to LoneStriker).
But the most important part is that the characters stay in character, even given that this is a group chat. What I’ve noticed in other models is that they would often mix and blend different personalities or the character gets muddled after reaching the full context - well, not in this case (mind you, I’ve been staying in full context for quite some time now, constantly introducing new characters to the ongoing story too). The villains are also VERY evil and when you tell the model that there is no plot armor - it treats that statement seriously (my character was brutally murdered in cold blood at least once at this point, thank the gods that retcons exist). Also worth noting that characters have no issues with interacting with each other and they are capable of progressing the plot on their own. I think it’s the first model for me with which I allow the AI to write with itself freely, while I simply read and enjoy the ride, munching on some popcorn.
I also absolutely love this model for how great it is with introspective narrative, of which I'm a big fan. And the way it handles humor, similes and metaphors? Absolutely perfect. It also reacts to subtle requests such as “where does she find herself now” by providing an environmental description - something that I never managed to get on Mixtral Instruct (which pains me greatly, since that model holds so much potential, but sucks at more prose-oriented writing for now). You can check the examples of how it writes in the screenshots attached to this post - mind the cringe though, apologies for it in advance. Plus, there is a funny bonus thrown in there.
In terms of NSFW content, the model also handles it great - it has no issues with swear words or with describing more niche fetishes or some gorier scenes. Hell, it especially goes wild with villain characters, sometimes making me audibly go “Jesus Christ” after presenting me with an output. And I was surprised how slowly and naturally it progresses sex scenes.
So, if you’re looking for an amazing model for longer, narrative roleplays, I recommend picking this one. I give it a solid 9/10, the score is not a full 10 due to it sometimes creating outputs with too much purple prose to my liking, or misinterpreting things from time to time, but it’s nothing a quick reroll can’t fix.
If you want my settings, Instruct or my Story String, just let me know in the comments! | 2024-01-07T10:28:36 | https://www.reddit.com/gallery/190pbtn | Meryiel | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 190pbtn | false | null | t3_190pbtn | /r/LocalLLaMA/comments/190pbtn/shoutout_to_a_great_rp_model/ | false | false | 58 | null | |
Any recommendation framework for enterprise conversational AI solutions? I know Rasa has an LLM-based CA; other than this, is there any open source out there | 2 | Langchain, llamaindex, haystack are all for research. Any open source for building Conversational AI for businesses | 2024-01-07T10:26:17 | https://www.reddit.com/r/LocalLLaMA/comments/190pan7/any_recommendation_framework_for_enterprise/ | mrripo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190pan7 | false | null | t3_190pan7 | /r/LocalLLaMA/comments/190pan7/any_recommendation_framework_for_enterprise/ | false | false | self | 2 | null |
Text To Speech model that can handle special chars and numbers | 1 | Hello. I am experimenting with text to speech synthesis by following this article on Huggingface [https://huggingface.co/docs/transformers/main/en/tasks/text-to-speech](https://huggingface.co/docs/transformers/main/en/tasks/text-to-speech). I am using "Matthijs/cmu-arctic-xvectors" embeddings dataset for speaker embedding and "microsoft/speecht5\_tts" model for text-to-speech conversion.
However, I am running into an issue wherein if the text contains numbers, dates, or special characters, the model either skips them entirely or adds some random noise into the audio stream.
Has anyone done something similar to handle such use case? Or is there some other approach I should take? I am open to any suggestions or resources that can help me fix this issue.
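Concretely, the normalization route (which is what num2words is for) looks something like the sketch below; a minimal version assuming the num2words package, with the caveat that real coverage needs dates, ordinals, currencies and units on top of this:

```python
import re
from num2words import num2words

SYMBOLS = {"%": " percent", "&": " and ", "+": " plus ", "@": " at "}

def normalize(text: str) -> str:
    for sym, word in SYMBOLS.items():
        text = text.replace(sym, word)
    text = re.sub(r"(\d),(\d)", r"\1\2", text)                        # 1,299 -> 1299
    text = re.sub(r"\d+", lambda m: num2words(int(m.group())), text)  # 42 -> forty-two
    return re.sub(r"\s+", " ", text).strip()

print(normalize("The meeting is at 9 and covers 2024 revenue, up 15%"))
```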
I have tried various approaches like using a library called num2words to convert numbers to their corresponding spoken word. But I feel there should be some better way to do this. | 2024-01-07T09:49:17 | https://www.reddit.com/r/LocalLLaMA/comments/190oryh/text_to_speech_model_that_can_handle_special/ | These-Butterfly8819 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190oryh | false | null | t3_190oryh | /r/LocalLLaMA/comments/190oryh/text_to_speech_model_that_can_handle_special/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'jfeVG47nZdEkz9kXfW1CcS-Sy8l4DXGb9JErx6bLKfU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=108&crop=smart&auto=webp&s=6c2099a4a9a69e9793ac03aec2e167bf75ab3eae', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=216&crop=smart&auto=webp&s=dcabb3007e27f246939f2505509da0bf9f06e3cb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=320&crop=smart&auto=webp&s=a41020cb42a130c35ac33053b5fe88d8fe248e1e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=640&crop=smart&auto=webp&s=346df50928db41b093b4e923255493f6937674d1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=960&crop=smart&auto=webp&s=891f7f0662a0311d7e83f06f6dc0f9b3f51104de', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?width=1080&crop=smart&auto=webp&s=dd2a0868f88770dba1f18821573ea10e7912b0e7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IfMbJOxNUMUzsY1dCStb5zfgtucewbklqG7kssYhOLc.jpg?auto=webp&s=e9a1cfc66ec990bd227118e1de3ff3c3f26d0c83', 'width': 1200}, 'variants': {}}]} |
llama.cpp might soon get real 2bit quants | 111 | 2024-01-07T09:33:07 | https://github.com/ggerganov/llama.cpp/pull/4773 | cztomsik | github.com | 1970-01-01T00:00:00 | 0 | {} | 190ojju | false | null | t3_190ojju | /r/LocalLLaMA/comments/190ojju/llamacpp_might_soon_get_real_2bit_quants/ | false | false | 111 | {'enabled': False, 'images': [{'id': 'aEPQc_sb6LYdT-qn9qPfnIklsRQ3gZARWeTYh5VB62g', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nhzrW2NkjbrptAvo0AnWruzc-DkuDVdonC9pvD_9CBA.jpg?width=108&crop=smart&auto=webp&s=91d652f58c951d7a00ed55dd5777db4ecc4a9246', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/nhzrW2NkjbrptAvo0AnWruzc-DkuDVdonC9pvD_9CBA.jpg?width=216&crop=smart&auto=webp&s=f1d2fd84044ff5fa2d8829e25710de23ba43870e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/nhzrW2NkjbrptAvo0AnWruzc-DkuDVdonC9pvD_9CBA.jpg?width=320&crop=smart&auto=webp&s=84934c498e268d726f4fffba19c4473f95254285', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/nhzrW2NkjbrptAvo0AnWruzc-DkuDVdonC9pvD_9CBA.jpg?width=640&crop=smart&auto=webp&s=37a9499e50dc5ded30a8d9b81e1044fdd31387ac', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/nhzrW2NkjbrptAvo0AnWruzc-DkuDVdonC9pvD_9CBA.jpg?width=960&crop=smart&auto=webp&s=a3444eb79f5d2fa14d7224aec79d4f6769d2fd20', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/nhzrW2NkjbrptAvo0AnWruzc-DkuDVdonC9pvD_9CBA.jpg?width=1080&crop=smart&auto=webp&s=e992c2c9c85a67dec940406065e5c96d6da28101', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/nhzrW2NkjbrptAvo0AnWruzc-DkuDVdonC9pvD_9CBA.jpg?auto=webp&s=5e69a854cd86a7a727df5e3f6b3186416952e8f6', 'width': 1200}, 'variants': {}}]} | ||
7B vs 13B vs 33B for basic tasks + fine-tuning (which one to choose) | 13 | I recently got started with local LLMs and I'm looking to use them for just one basic task (for now): writing (all kinds, from product descriptions to poetry).
I'm not expecting models to perform the way I want right out of the box, so I'm ready to do some fine-tuning. I've already started with Mistral 7B, and the fine-tuned outputs are close to perfect (so I'm pretty happy with a 7B model for now).
I just wanted to know if there's any benefit to using / fine-tuning a larger model. I understand that larger models grasp tasks better and can handle more complex stuff, but I don't need complex stuff, and since I can fine-tune, I don't really need them to understand much (well... not really, but still...). And I saw this regarding LLaMA: "We trained LLaMA 65B and LLaMA 33B on 1.4 trillion tokens. Our smallest model, LLaMA 7B, is trained on one trillion tokens." So the models, even though they have more parameters, are trained on a similar number of tokens.
So regarding my use case (writing), does a bigger model hold significantly more data? Or does it just understand tasks better? Are there any benefits to using a larger model vs a small one (considering that I'll need to fine-tune them both)? | 2024-01-07T09:19:11 | https://www.reddit.com/r/LocalLLaMA/comments/190ocf9/7b_vs_13b_vs_33b_for_basic_tasks_fine_tuning/ | yupignome | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190ocf9 | false | null | t3_190ocf9 | /r/LocalLLaMA/comments/190ocf9/7b_vs_13b_vs_33b_for_basic_tasks_fine_tuning/ | false | false | self | 13 | null |
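For context on the fine-tuning side discussed above, here is a minimal QLoRA-style sketch using transformers + peft + bitsandbytes. The base model name, hyperparameters, and target module names are assumptions; the same recipe applies whether the base is a 7B or a larger checkpoint — only VRAM use and training time change:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "mistralai/Mistral-7B-v0.1"  # assumed base; swap in a 13B/33B checkpoint to compare

# Load the frozen base model in 4-bit so the adapter can be trained on a single GPU
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")
model = prepare_model_for_kbit_training(model)

# Attach small trainable LoRA adapters; the quantized base weights stay frozen
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the base parameters
```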
Has anyone tried using AI Agents to use browser dev tools and troubleshoot or work on CSS or JS? | 1 | [removed] | 2024-01-07T08:53:50 | https://www.reddit.com/r/LocalLLaMA/comments/190nzk4/has_anyone_tried_using_ai_agents_to_use_browser/ | TheTwelveYearOld | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 190nzk4 | false | null | t3_190nzk4 | /r/LocalLLaMA/comments/190nzk4/has_anyone_tried_using_ai_agents_to_use_browser/ | false | false | 1 | null |