title
stringlengths
1
300
score
int64
0
8.54k
selftext
stringlengths
0
41.5k
created
timestamp[ns]date
2023-04-01 04:30:41
2026-03-04 02:14:14
url
stringlengths
0
878
author
stringlengths
3
20
domain
stringlengths
0
82
edited
timestamp[ns]date
1970-01-01 00:00:00
2026-02-19 14:51:53
gilded
int64
0
2
gildings
stringclasses
7 values
id
stringlengths
7
7
locked
bool
2 classes
media
stringlengths
646
1.8k
name
stringlengths
10
10
permalink
stringlengths
33
82
spoiler
bool
2 classes
stickied
bool
2 classes
thumbnail
stringlengths
4
213
ups
int64
0
8.54k
preview
stringlengths
301
5.01k
Kani: A Lightweight Highly Hackable Open-Source Framework for Building Chat Applications with Tool Usage (e.g. Plugins)
3
[removed]
2023-09-12T17:29:12
https://www.reddit.com/r/LocalLLaMA/comments/16gxg98/kani_a_lightweight_highly_hackable_opensource/
Liam-Dugan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gxg98
false
null
t3_16gxg98
/r/LocalLLaMA/comments/16gxg98/kani_a_lightweight_highly_hackable_opensource/
false
false
self
3
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&s=4b647239f77bf713f4a6209cfa4867351c055fd9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&s=7f4234ff3f4f4ebd7f77236dedb03a2faee3e04a', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&s=73eb91ea5a5347f216c0f0c4d6796396826aae49', 'width': 260}, 'variants': {}}]}
Open source blockchain platform for distributed llama model computation and training.
1
[removed]
2023-09-12T17:06:47
https://CamelidCoin.org
CamelidCoin
camelidcoin.org
1970-01-01T00:00:00
0
{}
16gwvn1
false
null
t3_16gwvn1
/r/LocalLLaMA/comments/16gwvn1/open_source_blockchain_platform_for_distributed/
false
false
default
1
null
Need help deciding on a CPU + Motherboard for running LLMs
3
Currently I already have * Lian Li O11 Dynamic Evo * Corsair HX1500i * 2TB Saumsung 980 Pro M.2 SSD * 1x4090 (planning to add another 4090/3090) I want the option to add another 3090/4090, another M2 SSD, and add up to 128GB of RAM in the future. I dont see myself getting a 3rd GPU unless its highly recommended but seems like most people go for dual GPU on this reddit. I'm trying to decide on one of the following PC bundles since they seem to be good value for money and in stock near me 1. [$400](https://www.microcenter.com/product/5006269/amd-ryzen-7-7700x,-msi-b650-p-pro-wifi,-gskill-flare-x5-series-32gb-ddr5-6000-kit,-computer-build-bundle) \- Ryzen 7 7700X (24 PCIE lanes) + MSI B650-P Pro WiFi (2x Gen4.0x16 + 2x Gen3.0x16) + G.Skill Flare X5 Series 32GB DDR5-6000 2. [$550](https://www.microcenter.com/product/5006461/intel-core-i7-13700k,-asus-z790-p-prime-wifi-ddr5,-gskill-32gb-ddr5-6000-kit,-computer-build-bundle) \- i7-13700K (20 PCIE lanes) + ASUS Z790-P Prime (1x Gen5.0x16 + 3x Gen4.0x16) + G.Skill 32GB DDR5-6000 3. [$600](https://www.microcenter.com/product/5006546/amd-ryzen-9-7900x,-asus-b650e-f-rog-strix-gaming-wifi,-gskill-flare-x5-series-kit-64gb-ddr5-6000,-computer-build-bundle) \- Ryzen 9 7900X (24 PCIE lanes) + ASUS B650E-F (1x Gen5.0x16 + 1xGen4.0x16) , G.Skill Flare X5 Series Kit 64GB DDR5-6000 They all support up to 128GB DDR5 RAM. (If it changes things, whatever I get I need to pay an additional $65 transport fee for my friend who lives next to a micro center and would check the parts into their suitcase when they visit me this week, so its really $465, $615, $665) Ryzen 7 looking like the best option since apparently CPU doesnt really matter and its the cheapest The i7 combo interests me but doesn't seem to have enough PCIE lanes (8x GPU, 8x GPU, 4x M.2, 4x M.2 = 24 PCIE lanes needed). Though I'm confused why the board has 4 PCIE gen4+5 slots when the CPU probably cant support that much hardware plugged in Ryzen 9 interests me mostly for the RAM (I could just add 64 more GB of RAM down the road. In the other builds I'd have to discard my existing ram and buy a whole new set). I like the idea of having a Ryzen 9 but doesnt seem like I need it. Any help would be GREATLY appreciated since this topic is so complicated
2023-09-12T17:01:42
https://www.reddit.com/r/LocalLLaMA/comments/16gwqma/need_help_deciding_on_a_cpu_motherboard_for/
yellowcustard77
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gwqma
false
null
t3_16gwqma
/r/LocalLLaMA/comments/16gwqma/need_help_deciding_on_a_cpu_motherboard_for/
false
false
self
3
null
Need for local GPU when you get cloud access?
1
In my company we want to evaluate use-cases for LLMs, for this I was assigned to develop PoCs about what are the possibilities. Our manager is looking to get us access to some Cloug-GPU-VMs, but I wondered if it would also make sense to have a laptop with a GPU, too. For me, with a local GPU I can debug and experiment faster in quick Iterations, can debug the code with breakpoints etc. But what would make sense here?
2023-09-12T16:39:20
https://www.reddit.com/r/LocalLLaMA/comments/16gw634/need_for_local_gpu_when_you_get_cloud_access/
Koliham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gw634
false
null
t3_16gw634
/r/LocalLLaMA/comments/16gw634/need_for_local_gpu_when_you_get_cloud_access/
false
false
self
1
null
Dear LocalLlama, can we please stop machine shaming? Many machines can use local models. We are not all trying to train the next LLM.
1
[removed]
2023-09-12T16:03:00
https://www.reddit.com/r/LocalLLaMA/comments/16gv8sb/dear_localllama_can_we_please_stop_machine/
jayfehr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gv8sb
false
null
t3_16gv8sb
/r/LocalLLaMA/comments/16gv8sb/dear_localllama_can_we_please_stop_machine/
false
false
self
1
null
Floneum 0.2 released: open source, local graph editor now with headless browsing, a package manager, cloud saves, and more
63
2023-09-12T15:46:13
https://v.redd.it/vw0u83j3wtnb1
ControlNational
v.redd.it
1970-01-01T00:00:00
0
{}
16gusz2
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/vw0u83j3wtnb1/DASHPlaylist.mpd?a=1697125588%2CMTY4MDIyZmE3ODU0OTdiNzFlNGRiMTcyYzQxZGQ0NTVmZmViNGQ1MTA2YTdiNzg4NWQ1MWQwY2NlMDk0ZTM0ZQ%3D%3D&v=1&f=sd', 'duration': 8, 'fallback_url': 'https://v.redd.it/vw0u83j3wtnb1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/vw0u83j3wtnb1/HLSPlaylist.m3u8?a=1697125588%2CYTVkZWE0Zjc0ZGRhYzM1N2NjMjA3NmMwYjUzNWZiYjc3OWZiNTRkNTk5NTJkNWRiYWM3ZjIyZWRjNGQ0NWYzOA%3D%3D&v=1&f=sd', 'is_gif': True, 'scrubber_media_url': 'https://v.redd.it/vw0u83j3wtnb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_16gusz2
/r/LocalLLaMA/comments/16gusz2/floneum_02_released_open_source_local_graph/
false
false
https://b.thumbs.redditm…09UCeuTE196U.jpg
63
{'enabled': False, 'images': [{'id': '9HMbowQ1t1Vq5xNvyN--dTDRIUnFaKCJDvWZGIMvUAY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/n5W-pvVvl-WY_4ND1hN4fz93A0311b7LF6dPqIss244.png?width=108&crop=smart&format=pjpg&auto=webp&s=849b3439ea54f16e4dc4b13413b4f32ade686071', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/n5W-pvVvl-WY_4ND1hN4fz93A0311b7LF6dPqIss244.png?width=216&crop=smart&format=pjpg&auto=webp&s=d00f2e41552958dd86485130ee50519fdd834279', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/n5W-pvVvl-WY_4ND1hN4fz93A0311b7LF6dPqIss244.png?width=320&crop=smart&format=pjpg&auto=webp&s=38f80bada0dcc1be196c39b403d1f6c1990941a6', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/n5W-pvVvl-WY_4ND1hN4fz93A0311b7LF6dPqIss244.png?width=640&crop=smart&format=pjpg&auto=webp&s=93350c4c35d6e4be26c9d0cfec07ab17f1fd9a7f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/n5W-pvVvl-WY_4ND1hN4fz93A0311b7LF6dPqIss244.png?width=960&crop=smart&format=pjpg&auto=webp&s=8fc53cab6394cc8ded657ff6f321022a650aa8df', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/n5W-pvVvl-WY_4ND1hN4fz93A0311b7LF6dPqIss244.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1430e26d5f96f1fda7f37daa6acf02f22e2b30b5', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/n5W-pvVvl-WY_4ND1hN4fz93A0311b7LF6dPqIss244.png?format=pjpg&auto=webp&s=b7fe7352e9502e61d60899d6578a4051a2b8dc14', 'width': 1920}, 'variants': {}}]}
airoboros/spicyboros 2.2
75
Hi all, The new airoboros 2.2 and spicyboros 2.2 models are all uploaded to HF (some are still being quantized by the legendary bloke). * [https://huggingface.co/jondurbin/airoboros-l2-70b-2.2](https://huggingface.co/jondurbin/airoboros-l2-70b-2.2) * [https://huggingface.co/jondurbin/airoboros-l2-13b-2.2](https://huggingface.co/jondurbin/airoboros-l2-13b-2.2) * [https://huggingface.co/jondurbin/airoboros-l2-7b-2.2](https://huggingface.co/jondurbin/airoboros-l2-7b-2.2) * [https://huggingface.co/jondurbin/spicyboros-70b-2.2](https://huggingface.co/jondurbin/spicyboros-70b-2.2) * [https://huggingface.co/jondurbin/spicyboros-c34b-2.2](https://huggingface.co/jondurbin/spicyboros-c34b-2.2) * [https://huggingface.co/jondurbin/spicyboros-13b-2.2](https://huggingface.co/jondurbin/spicyboros-13b-2.2) * [https://huggingface.co/jondurbin/spicyboros-7b-2.2](https://huggingface.co/jondurbin/spicyboros-7b-2.2) * (airoboros codellama-34 will be uploaded soon, likely tomorrow morning) **airoboros vs spicyboros?** spicyboros is the ~~uncensored~~ *actively de-censored* version. It will still likely have some refusals, and bias inherited from the base llama-2 model, but is more likely to produce "harmful" content. The de-alignment data was not super comprehensive, but seems to have worked well, if you're into that sort of thing. The de-alignment dataset includes a small amount of comedy, horror stories, llm-enhanced (nsfw) reddit stories, etc., so it will produce less PG content (when asked). The 7b model can be a bit... unhinged, so be careful with that one. I removed all de-alignment data from airoboros models, so you may find them more censored than previous versions - use the spicy version instead if you want an uncensored model. **Prompt format** The prompt format now uses newlines instead of spaces! A chat. USER: {prompt} ASSISTANT: So, system prompt, newline, USER: {prompt} (one space after colon), newline, ASSISTANT: There's plenty of training data that uses other names, so for a chat or RP you can replace USER/ASSISTANT with Tim:/Bob: or whatever. The dataset includes many alternate system prompts to ensure the response are more likely to be styled by the system prompt, but I shorted the default system prompt to just "A chat." **Dataset updates** * enhanced "awareness" (based on system prompt) [https://huggingface.co/datasets/jondurbin/airoboros-2.2#awareness](https://huggingface.co/datasets/jondurbin/airoboros-2.2#awareness) * text enhancement [https://huggingface.co/datasets/jondurbin/airoboros-2.2#editor](https://huggingface.co/datasets/jondurbin/airoboros-2.2#editor) * I regenerated (almost) all of the training data that included "Once upon a time..." because it's too cliche and boring * summarization [https://huggingface.co/datasets/jondurbin/airoboros-2.2#summarization](https://huggingface.co/datasets/jondurbin/airoboros-2.2#summarization) * I re-created RP/GTKM data without USER/ASSISTANT tokens polluting it: [https://huggingface.co/datasets/jondurbin/airoboros-2.2#roleplayconversation](https://huggingface.co/datasets/jondurbin/airoboros-2.2#roleplayconversation) Edit: no space after "ASSISTANT:" (ty u/WolframRavenwolf) Edit: checkpoints (lora adapters) here: * https://huggingface.co/jondurbin/airoboros-l2-70b-2.2-checkpoints * https://huggingface.co/jondurbin/airoboros-c34b-2.2-checkpoints * https://huggingface.co/jondurbin/spicyboros-70b-2.2-checkpoints
2023-09-12T15:17:40
https://www.reddit.com/r/LocalLLaMA/comments/16gu2x7/airoborosspicyboros_22/
JonDurbin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gu2x7
false
null
t3_16gu2x7
/r/LocalLLaMA/comments/16gu2x7/airoborosspicyboros_22/
false
false
self
75
{'enabled': False, 'images': [{'id': 'u7de09cXBlRQhZUQhrj_qWKOw87SB4MEo0rlwYokI64', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/k-LhMqJbiK8WUZaGx1snj5c1DGMHUQS1FfSX-PRuyk8.jpg?width=108&crop=smart&auto=webp&s=2e8f2ef8819f9a499a65d2ba807cacad91ea6b0a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/k-LhMqJbiK8WUZaGx1snj5c1DGMHUQS1FfSX-PRuyk8.jpg?width=216&crop=smart&auto=webp&s=7df33af0dcdf988948307884013122f775537e2e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/k-LhMqJbiK8WUZaGx1snj5c1DGMHUQS1FfSX-PRuyk8.jpg?width=320&crop=smart&auto=webp&s=79fded5b52ecb398a36276ce18ae30b5e80489c4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/k-LhMqJbiK8WUZaGx1snj5c1DGMHUQS1FfSX-PRuyk8.jpg?width=640&crop=smart&auto=webp&s=df0e63738bd5e3d29659a495c0c297bfc3b1fc89', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/k-LhMqJbiK8WUZaGx1snj5c1DGMHUQS1FfSX-PRuyk8.jpg?width=960&crop=smart&auto=webp&s=7f99f5b88229ba38b69df5e67e408d64d472dbe1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/k-LhMqJbiK8WUZaGx1snj5c1DGMHUQS1FfSX-PRuyk8.jpg?width=1080&crop=smart&auto=webp&s=e20ce3c2c4eb7697c13cada6850e90dc14c01c38', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/k-LhMqJbiK8WUZaGx1snj5c1DGMHUQS1FfSX-PRuyk8.jpg?auto=webp&s=be11e877a9c9c10f0a22dee3da684034dc4315e9', 'width': 1200}, 'variants': {}}]}
What's the best use case for phi-1 (~1bn param GPT3.5)?
42
MSFT just released their Phi-1 and Phi-1.5 models which demonstrate very impressive reasoning abilities for their size, thanks to training on high-quality synthetic data. However, this approach means that these Phi models lack the extensive latent knowledge often embedded in other models. Using them with RAG seems like a natural fit requiring minimal adjustments. Moreover, with some tweaking, they might be well-suited for roleplay or fantasy scenarios. How would you envision utilizing these models? I'm interested in working to develop promising use cases, as some [were discussed here](https://www.reddit.com/r/LocalLLaMA/comments/16au3ga/im_convinced_now_that_personal_llms_are_going_to/). ​ This can be accomplished either with fine-tuning of Phi-1 or pre-training from scratch (I have been working on the latter this past week and have [seen some cool results](https://twitter.com/ocolegro/status/1700159878155600165)).
2023-09-12T14:52:04
https://www.reddit.com/r/LocalLLaMA/comments/16gtfmo/whats_the_best_use_case_for_phi1_1bn_param_gpt35/
docsoc1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gtfmo
false
null
t3_16gtfmo
/r/LocalLLaMA/comments/16gtfmo/whats_the_best_use_case_for_phi1_1bn_param_gpt35/
false
false
self
42
{'enabled': False, 'images': [{'id': 'u38mQ-Nq8c9Xe_odMINXxDi4tElFD6j9FqN81b86Y6w', 'resolutions': [{'height': 20, 'url': 'https://external-preview.redd.it/IsGM_jdSAzTVAiinm8jtQ2gWHTTs9Ihem9Ji95Rs8Tc.jpg?width=108&crop=smart&auto=webp&s=0c648d924c7a99c0534d56a0b65db5a7fd21d8ed', 'width': 108}], 'source': {'height': 26, 'url': 'https://external-preview.redd.it/IsGM_jdSAzTVAiinm8jtQ2gWHTTs9Ihem9Ji95Rs8Tc.jpg?auto=webp&s=3df8a8baa1bd1d7c740e20e96256ba73c0ff9f89', 'width': 140}, 'variants': {}}]}
Deploying Llama 2 in any cloud with Python API
1
2023-09-12T14:17:55
https://dstack.ai/examples/python-api/
cheptsov
dstack.ai
1970-01-01T00:00:00
0
{}
16gslb9
false
null
t3_16gslb9
/r/LocalLLaMA/comments/16gslb9/deploying_llama_2_in_any_cloud_with_python_api/
false
false
https://b.thumbs.redditm…cK3ztZyXjIdw.jpg
1
{'enabled': False, 'images': [{'id': 'Kk5_6cq8ODsGTK0N1L02ZBr2TKBzHScfNlQn15DzOfM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/5S1IClMnFkSSqVPks-PlJ5w8-Vl7ZN_VV0kjsLza2Zg.jpg?width=108&crop=smart&auto=webp&s=9a2d195ced901855ed37f117c3df4d3561cc243f', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/5S1IClMnFkSSqVPks-PlJ5w8-Vl7ZN_VV0kjsLza2Zg.jpg?width=216&crop=smart&auto=webp&s=e3ce55c7cb48e5844593de306592a52c6cccb264', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/5S1IClMnFkSSqVPks-PlJ5w8-Vl7ZN_VV0kjsLza2Zg.jpg?width=320&crop=smart&auto=webp&s=981036be8c8e5f02dca08f8b28981573f6d5ce54', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/5S1IClMnFkSSqVPks-PlJ5w8-Vl7ZN_VV0kjsLza2Zg.jpg?width=640&crop=smart&auto=webp&s=a0ab11161cf1773c2f62628c62497a7a5aa3b15e', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/5S1IClMnFkSSqVPks-PlJ5w8-Vl7ZN_VV0kjsLza2Zg.jpg?width=960&crop=smart&auto=webp&s=4efa70e8f4a499485e08ce2d27a644df98327088', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/5S1IClMnFkSSqVPks-PlJ5w8-Vl7ZN_VV0kjsLza2Zg.jpg?width=1080&crop=smart&auto=webp&s=8686a2896ff4e59e4231f2daab5d5dfa60d160f0', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/5S1IClMnFkSSqVPks-PlJ5w8-Vl7ZN_VV0kjsLza2Zg.jpg?auto=webp&s=8342fb2aae3cd1a0798143b91b38887b26d10248', 'width': 1200}, 'variants': {}}]}
How do you run Llama 2 on multi GPUs?
2
I got access to a few GPUs and wanted to try my hand at deploying a large LLM on 2 GPUs. At first I tried the 7B model on two 3060s using huggingface+accelerate code but it kept giving me an OOM error. Same thing with a 3090. So how do you run these models on multi GPUs? Edit: Maybe it is an issue with the gpu service I am using? Maybe their GPUs are not set up properly? I am using valdi.ai
2023-09-12T14:16:09
https://www.reddit.com/r/LocalLLaMA/comments/16gsjs9/how_do_you_run_llama_2_on_multi_gpus/
soham1996
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gsjs9
false
null
t3_16gsjs9
/r/LocalLLaMA/comments/16gsjs9/how_do_you_run_llama_2_on_multi_gpus/
false
false
self
2
null
So, uh, Mythomax-70b showed up yesterday
81
[https://huggingface.co/lloorree/mythomax-70b](https://huggingface.co/lloorree/mythomax-70b) Didn't see any posts about it, but I tried it and it's living up to the name. Only gripes is that it only has 4k context and that it doesn't handle nonstandard dialogue very well (though that is a lot of models.) But it seems to have extremely good coherence, world knowledge, and prompt adherence, and it's rare to find all three in the same model.
2023-09-12T14:11:24
https://www.reddit.com/r/LocalLLaMA/comments/16gsfp4/so_uh_mythomax70b_showed_up_yesterday/
tenmileswide
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gsfp4
false
null
t3_16gsfp4
/r/LocalLLaMA/comments/16gsfp4/so_uh_mythomax70b_showed_up_yesterday/
false
false
self
81
{'enabled': False, 'images': [{'id': 'fjpAVvOKyIgXJKcYe-MW2t1yiFocrEaWlxcGh_zTCJU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6nXsxc5KjOBbDN6l893u6E2qZNNA8UfXXtTWi-gh1-I.jpg?width=108&crop=smart&auto=webp&s=9e0b88d8abe0f7528ec174810bca8b5add082cfa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/6nXsxc5KjOBbDN6l893u6E2qZNNA8UfXXtTWi-gh1-I.jpg?width=216&crop=smart&auto=webp&s=bae7dedd2066e4025a01f1a5f83b3132d7958ee7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/6nXsxc5KjOBbDN6l893u6E2qZNNA8UfXXtTWi-gh1-I.jpg?width=320&crop=smart&auto=webp&s=b17137a72e87558521cca411242de9a383d9c551', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/6nXsxc5KjOBbDN6l893u6E2qZNNA8UfXXtTWi-gh1-I.jpg?width=640&crop=smart&auto=webp&s=0f864e2614e1696410798a30c7a6d0534fd130f5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/6nXsxc5KjOBbDN6l893u6E2qZNNA8UfXXtTWi-gh1-I.jpg?width=960&crop=smart&auto=webp&s=0a302ba8b68064f31d205892b131f71b271285af', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/6nXsxc5KjOBbDN6l893u6E2qZNNA8UfXXtTWi-gh1-I.jpg?width=1080&crop=smart&auto=webp&s=22cacaf033b7bb10e15f5e1a71488837ee79a5ab', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/6nXsxc5KjOBbDN6l893u6E2qZNNA8UfXXtTWi-gh1-I.jpg?auto=webp&s=1e5abe1147d6497284c95fe7c6d07d4fa9a86e31', 'width': 1200}, 'variants': {}}]}
What is the best model I can run on-prem that is compatible with Langchain?
0
I am trying to build a system with a chromadb and and llm and was hoping to use langchain for ease. It seems like I cannot run LLAMA 2 anytime soon since I am using LangChain. I can rent any EC2 I want, I just don't know which one and what model I should use. Accuracy/quality is the biggest thing for me. ​
2023-09-12T13:45:43
https://www.reddit.com/r/LocalLLaMA/comments/16grt87/what_is_the_best_model_i_can_run_onprem_that_is/
Suitable-Ad-8598
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16grt87
false
null
t3_16grt87
/r/LocalLLaMA/comments/16grt87/what_is_the_best_model_i_can_run_onprem_that_is/
false
false
self
0
null
Best Mid-Level Build?
5
Hey all, ​ I am doing some research and looking for some help. I want to have a mid-level build to start doing my own stable diffusion and LLM fine tuning. Not full on training as that requires a ton of resources. I was looking at the Nvidia A6000 as that has 48GB which should be enough to run some training. Is it true you should have the Xeon W processors to take advantage? Also, was looking at a server rack like this that I can keep in my basement plugged into my router. [https://www.ebay.com/itm/175626243153](https://www.ebay.com/itm/175626243153) Do you think this would be enough for my needs?
2023-09-12T13:00:20
https://www.reddit.com/r/LocalLLaMA/comments/16gqr9n/best_midlevel_build/
rbur0425
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gqr9n
false
null
t3_16gqr9n
/r/LocalLLaMA/comments/16gqr9n/best_midlevel_build/
false
false
self
5
{'enabled': False, 'images': [{'id': 'ckNus53wGcFtuPsDUYJY69RYKvTcrnvDc117pxYsJN0', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Anlrq7i00THxwvlQr0pVXnTnNU9l-SE1tOugKBwUMPA.jpg?width=108&crop=smart&auto=webp&s=92817421bd0a6c5bbfd214ff6886c5bb7e65b4a5', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Anlrq7i00THxwvlQr0pVXnTnNU9l-SE1tOugKBwUMPA.jpg?width=216&crop=smart&auto=webp&s=c9953b35f70cb7fac714fc4ecc524865901a48f6', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Anlrq7i00THxwvlQr0pVXnTnNU9l-SE1tOugKBwUMPA.jpg?width=320&crop=smart&auto=webp&s=1dced3f15d4f65f9adfaa48a4fe062323f52d6fa', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/Anlrq7i00THxwvlQr0pVXnTnNU9l-SE1tOugKBwUMPA.jpg?width=640&crop=smart&auto=webp&s=dde6899d783645965c81973df100e4e4378042be', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/Anlrq7i00THxwvlQr0pVXnTnNU9l-SE1tOugKBwUMPA.jpg?width=960&crop=smart&auto=webp&s=7e39f1cd174646ec0cfa8197288cc84e6da6acfd', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/Anlrq7i00THxwvlQr0pVXnTnNU9l-SE1tOugKBwUMPA.jpg?width=1080&crop=smart&auto=webp&s=d6adc82718742b0344e49e687467e65aa4dc109a', 'width': 1080}], 'source': {'height': 1125, 'url': 'https://external-preview.redd.it/Anlrq7i00THxwvlQr0pVXnTnNU9l-SE1tOugKBwUMPA.jpg?auto=webp&s=edccc227f98d63a6e59b8c51104653940903f2fa', 'width': 1500}, 'variants': {}}]}
How can I use oobabooga with flowise?
1
I'm enabling api extension on oobabooga but can't connect to flowise
2023-09-12T12:50:45
https://www.reddit.com/r/LocalLLaMA/comments/16gqjxg/how_can_i_use_oobabooga_with_flowise/
forwatching
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gqjxg
false
null
t3_16gqjxg
/r/LocalLLaMA/comments/16gqjxg/how_can_i_use_oobabooga_with_flowise/
false
false
self
1
null
Converting Instructor Embeddings to Onnx. Issues with T5 encoder-only model.
2
Hi, I am trying to convert instructor-base to ONNX to quantize the model for faster CPU performance. The model is a T5 Encoder. I've made various attempts, all having similar results. The issue is that when I try to do this, it initializes encoder weights (which are nonexistent since this is a T5 Encoder model). And when building a graph, it tries to call decoder\_outputs, and throws the following error: ValueError: You have to specify either decoder\_input\_ids or decoder\_inputs\_embeds Again, these don't exist. I've tried using: \- fastT5.export\_and\_get\_onnx\_model \- torch.onnx.export \- txtai.pipeline.HFOnnx Has anyone else tried this? Any advice is greatly appreciated!
2023-09-12T12:50:23
https://www.reddit.com/r/LocalLLaMA/comments/16gqjnc/converting_instructor_embeddings_to_onnx_issues/
GoobleGravity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gqjnc
false
null
t3_16gqjnc
/r/LocalLLaMA/comments/16gqjnc/converting_instructor_embeddings_to_onnx_issues/
false
false
self
2
null
Exllama V2 has dropped!
285
2023-09-12T12:27:32
https://github.com/turboderp/exllamav2
a_beautiful_rhind
github.com
1970-01-01T00:00:00
0
{}
16gq2gu
false
null
t3_16gq2gu
/r/LocalLLaMA/comments/16gq2gu/exllama_v2_has_dropped/
false
false
https://b.thumbs.redditm…Bt9g-ijcrDTc.jpg
285
{'enabled': False, 'images': [{'id': 'CC1TwWCFAVWOyki4LM9sNnBqJjQOKr1C1yxTKpke4PE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pxge0nuWVu2mvf8OhCDz3mEmXIAuYy9YJAhKElo_T0c.jpg?width=108&crop=smart&auto=webp&s=738a7c687307a6a537705f836a3d8bbc7758219a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pxge0nuWVu2mvf8OhCDz3mEmXIAuYy9YJAhKElo_T0c.jpg?width=216&crop=smart&auto=webp&s=d2b87f821f36ee08022190fced407b9c7d4d2933', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pxge0nuWVu2mvf8OhCDz3mEmXIAuYy9YJAhKElo_T0c.jpg?width=320&crop=smart&auto=webp&s=5f16b647214400731750c6b66568b2fbf7cbb9b1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pxge0nuWVu2mvf8OhCDz3mEmXIAuYy9YJAhKElo_T0c.jpg?width=640&crop=smart&auto=webp&s=09c7d8641bce107b773b16a92876d36a3cc3955d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pxge0nuWVu2mvf8OhCDz3mEmXIAuYy9YJAhKElo_T0c.jpg?width=960&crop=smart&auto=webp&s=2efd8b3d752e0d0b41c0378f6ec076706e0858f6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pxge0nuWVu2mvf8OhCDz3mEmXIAuYy9YJAhKElo_T0c.jpg?width=1080&crop=smart&auto=webp&s=0e57741adb20a9e85aec8b8b3faa6099082c0a5a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pxge0nuWVu2mvf8OhCDz3mEmXIAuYy9YJAhKElo_T0c.jpg?auto=webp&s=931ed7d3398a15ec00a1ac8b30290bde775b848d', 'width': 1200}, 'variants': {}}]}
Mojo 🔥
42
If you haven’t heard, Mojo is a new programming language that combines the ease of Python with the performance of C. It’s written specifically for AI. Here is an example project that should excite us all (not mine btw): https://github.com/tairov/llama2.mojo
2023-09-12T12:24:23
https://www.reddit.com/r/LocalLLaMA/comments/16gq09y/mojo/
Tough_Performer6101
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gq09y
false
null
t3_16gq09y
/r/LocalLLaMA/comments/16gq09y/mojo/
false
false
self
42
{'enabled': False, 'images': [{'id': 'EvKKRuTWKjxzXehjCXwzOaTMLxuJtPJl3wtgviFFVpM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/G5SYLSHSDbcMNZuhVPNqtEFhGWFsfDYY7KcZZTs5nR4.jpg?width=108&crop=smart&auto=webp&s=4408cf806f4860d14c3df8f4b3e5fa58ed8358b4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/G5SYLSHSDbcMNZuhVPNqtEFhGWFsfDYY7KcZZTs5nR4.jpg?width=216&crop=smart&auto=webp&s=f88c9aadcab47ff617b032a60928feadb44c7c7d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/G5SYLSHSDbcMNZuhVPNqtEFhGWFsfDYY7KcZZTs5nR4.jpg?width=320&crop=smart&auto=webp&s=8595a34f7a74c2b40214b59c19bfc3facd273251', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/G5SYLSHSDbcMNZuhVPNqtEFhGWFsfDYY7KcZZTs5nR4.jpg?width=640&crop=smart&auto=webp&s=25d2d9decae4245db64e56ec15db079bc25afb88', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/G5SYLSHSDbcMNZuhVPNqtEFhGWFsfDYY7KcZZTs5nR4.jpg?width=960&crop=smart&auto=webp&s=c216b6ba2f26daf1300bb782f1ca2718fbf35715', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/G5SYLSHSDbcMNZuhVPNqtEFhGWFsfDYY7KcZZTs5nR4.jpg?width=1080&crop=smart&auto=webp&s=9e82bf7100529ca9683a4d5afa7d13e68ccd2500', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/G5SYLSHSDbcMNZuhVPNqtEFhGWFsfDYY7KcZZTs5nR4.jpg?auto=webp&s=83a879882ebe91aa7ba2e08562d21c0d1d6642a5', 'width': 1200}, 'variants': {}}]}
best coding llama model?
5
so far, whats the best coding companion? i can run up to 34b readily. Im looking for multi-lingual preferably for general purpose, but definitely want it to be c# capable.
2023-09-12T12:14:50
https://www.reddit.com/r/LocalLLaMA/comments/16gptaf/best_coding_llama_model/
Nekasus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gptaf
false
null
t3_16gptaf
/r/LocalLLaMA/comments/16gptaf/best_coding_llama_model/
false
false
self
5
null
Qwen models removed by their authors ?
1
[removed]
2023-09-12T12:02:14
https://www.reddit.com/r/LocalLLaMA/comments/16gpjja/qwen_models_removed_by_their_authors/
Bogdahnfr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gpjja
false
null
t3_16gpjja
/r/LocalLLaMA/comments/16gpjja/qwen_models_removed_by_their_authors/
false
false
self
1
{'enabled': False, 'images': [{'id': '4ydK36A-ivNKrrTlN5TOZ4gwbyWOvLMU6vjhwDE6ni8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PKiozV9PvpTCseDqQxJv7-s14UHUYb9eM9y8Z5uMtkQ.jpg?width=108&crop=smart&auto=webp&s=85bee086a8a4adf5b6313887258ca51e8030bdbd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PKiozV9PvpTCseDqQxJv7-s14UHUYb9eM9y8Z5uMtkQ.jpg?width=216&crop=smart&auto=webp&s=7f3c32fa6946e018ed1bded412c9b767351aec77', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PKiozV9PvpTCseDqQxJv7-s14UHUYb9eM9y8Z5uMtkQ.jpg?width=320&crop=smart&auto=webp&s=ae6c16f242252cb7c2ad3471f7ce0c051f534556', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PKiozV9PvpTCseDqQxJv7-s14UHUYb9eM9y8Z5uMtkQ.jpg?width=640&crop=smart&auto=webp&s=77f084f6d32ef2d6af135d2b4940f60ffeace0de', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PKiozV9PvpTCseDqQxJv7-s14UHUYb9eM9y8Z5uMtkQ.jpg?width=960&crop=smart&auto=webp&s=73d35be537a81a086a170974ab4bb6ba2be63654', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PKiozV9PvpTCseDqQxJv7-s14UHUYb9eM9y8Z5uMtkQ.jpg?width=1080&crop=smart&auto=webp&s=bd66814abc8f47b3154130c2bf26e8d0b176c0bb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PKiozV9PvpTCseDqQxJv7-s14UHUYb9eM9y8Z5uMtkQ.jpg?auto=webp&s=ca292edde219bc0ee23cedb4a10cec397683b907', 'width': 1200}, 'variants': {}}]}
AskCyph™ LITE: Run AI Model in Brower (Redpajama, Llama2 7B)
1
Hello LLama Enthusiasts, Have you ever wished for AI that’s truly personal and private? Introducing [AskCyph™ LITE](https://askcyph.cypherchat.app/), a lightweight AI chatbot that runs AI Models directly in your browser. The first time it takes time to download the model, the next time it is a lot faster to initialize. Currently, we support Red Pajama (Basic) and Llama 2 7b (Advanced). We created this as a way for enthusiasts of all levels to take a plunge and have an AI model running. A bit about us. We are the creator of [CypherChat](https://cypherchat.app/), a privacy-focused messaging platform that does not rely on personal information and offers End-to-End Encrypted (E2EE) and Peer-to-Peer (P2P) communication. We recognize the need for privacy. Note [AskCyph™ LITE](https://askcyph.cypherchat.app/), is decoupled from CypherChat and does not require you to sign up so it is a click away for you to try. ✅ Offline access ✅ Enhanced privacy ✅ Basic or Advanced Models ✅ Requires 4GB/8GB free RAM ✅ Relatively new Computer with integrated or external GPU ​ Our vision is about making AI accessible to everyone, ensuring security, and providing a unique experience. It's still experimental but growing. Those who are recently getting into AI with a non-technical background can also say they have their model running. We value your feedback and questions! We do want to give a shout-out and acknowledge community projects that inspired us and made innovation and AskCyph™ LITE possible. * [Hugging Face](https://huggingface.co/) * [Apache TVM](https://tvm.apache.org/) * [MLC AI - Web LLM](https://webllm.mlc.ai/) * [TOGETHER](https://together.ai/) * [Llama2](https://ai.meta.com/llama/) \#AIForAll #AskCyphLITE #CypherChat
2023-09-12T11:44:27
https://www.reddit.com/r/LocalLLaMA/comments/16gp5t6/askcyph_lite_run_ai_model_in_brower_redpajama/
cypherchat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gp5t6
false
null
t3_16gp5t6
/r/LocalLLaMA/comments/16gp5t6/askcyph_lite_run_ai_model_in_brower_redpajama/
false
false
self
1
null
LLM Recommendation: Don't sleep on Synthia!
84
I'm currently working on another in-depth LLM comparison after my previous [test of 13 models](https://www.reddit.com/r/LocalLLaMA/comments/15lihmq/big_model_comparisontest_13_models_tested/) and [test of 7 more models](https://www.reddit.com/r/LocalLLaMA/comments/15ogc60/new_model_rp_comparisontest_7_models_tested/) - this time it's 20 models, so it takes a while... But I can't wait any longer because one model has proven to be so good that I just need to talk about it now! > SynthIA (Synthetic Intelligent Agent) is a LLama-2-70B model trained on Orca style datasets. It has been fine-tuned for instruction following as well as having long-form conversations. > > All Synthia models are uncensored. Please use it with caution and with best intentions. You are responsible for how you use Synthia. That's from the [model cards on Hugging Face](https://huggingface.co/migtissera) (there are multiple versions as the author keeps updating it). Sounds good, so I tried it ([TheBloke/Synthia-70B-v1.2-GGUF](https://huggingface.co/TheBloke/Synthia-70B-v1.2-GGUF) Q4_0), and after using it extensively for a few days now, it's become my new favorite model. Why? Its combination of intelligence and personality (and even humor) surpassed all the other models I tried, which include Airoboros, Chronos-Hermes, Llama 2 Chat, MythoMax, Nous Hermes, Nous Puffin, and Samantha. Especially the latter has also been praised for its personality and intelligence, but Samantha is censored worse than Llama 2 Chat, and while I can get her to do NSFW roleplay, she's too moralizing and needs constant coercion, that's why I consider her too annoying to bother with (I already have my wife to argue or fight with, don't need an AI for that! ;)). Synthia has shown at least as much intelligence and personality, and she's uncensored, so she's always fun to talk to and very easy-going no matter the topic or theme. So after my previous favorites Nous Hermes and MythoMax, now it's Synthia. But the reason I'm so excited about this model is not just that it's become my latest favorite for entertainment purposes, no, today I actually tried it for work-related purposes (write shell scripts, Kubernetes and Terraform manifests, install and debug software, etc.) - and it worked much better than expected, even when compared to GPT-4 which I used to cross-reference my answers (here's just one example of [Synthia 70B v1.2 (Q4_0) vs. GPT-4](https://imgur.com/a/G24okTK)). Until now, I must admit that I had considered local LLMs just for entertainment purposes - for work, I'd simply use ChatGPT or GPT-4. But the intelligence Synthia exhibited in chat and roleplay made me curious, so I tried it for work, and now I start to see the potential. Anyway, I've not seen this model mentioned a lot - in fact, searching for it here, there was only one mention of it so far. I needed to post this to change that because I've tested so many models and this one has truly surprised me very positively. I'll post the detailed evaluation results of the other models once I'm done with all the tests, but for now, I had to post this because of my sincere excitement right now. **TL;DR:** Try **Synthia** for chat, roleplay, and even work! By the way, there's a newer [v1.2b](https://huggingface.co/migtissera/Synthia-70B-v1.2b) that still needs quantization by u/The-Bloke. And there are smaller 13B and even 7B versions, which I haven't tested extensively so can't speak of their quality, but if 70B is too big or too slow for you, I recommend you give those a try.
2023-09-12T11:15:30
https://www.reddit.com/r/LocalLLaMA/comments/16gokoa/llm_recommendation_dont_sleep_on_synthia/
WolframRavenwolf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gokoa
false
null
t3_16gokoa
/r/LocalLLaMA/comments/16gokoa/llm_recommendation_dont_sleep_on_synthia/
false
false
self
84
{'enabled': False, 'images': [{'id': 'C5XErO93I36-NbN-scAGZWB9fcwUctaccaqpnQtxC8g', 'resolutions': [{'height': 131, 'url': 'https://external-preview.redd.it/WdqJYPDT1O2AxCe7OTgiv-F1QFNXtDggQpXVTCxylm4.jpg?width=108&crop=smart&auto=webp&s=088d2af408fab95b88cdc33d9127d2c0c7617beb', 'width': 108}, {'height': 262, 'url': 'https://external-preview.redd.it/WdqJYPDT1O2AxCe7OTgiv-F1QFNXtDggQpXVTCxylm4.jpg?width=216&crop=smart&auto=webp&s=bd744c011a1d6455369981bfb2ac7ebe00153991', 'width': 216}, {'height': 388, 'url': 'https://external-preview.redd.it/WdqJYPDT1O2AxCe7OTgiv-F1QFNXtDggQpXVTCxylm4.jpg?width=320&crop=smart&auto=webp&s=9cd77afe4ab2ffb5a22740f37a69559dea46b16f', 'width': 320}, {'height': 777, 'url': 'https://external-preview.redd.it/WdqJYPDT1O2AxCe7OTgiv-F1QFNXtDggQpXVTCxylm4.jpg?width=640&crop=smart&auto=webp&s=53f6820edc19ac7aaf15a2ff9110051b35d9e57b', 'width': 640}], 'source': {'height': 954, 'url': 'https://external-preview.redd.it/WdqJYPDT1O2AxCe7OTgiv-F1QFNXtDggQpXVTCxylm4.jpg?auto=webp&s=53393b2b4dad1afc6420cf2ee5294173787c4a20', 'width': 785}, 'variants': {}}]}
How fast will quantized Falcon-180B run on system RAM?
1
Has anyone tried it yet? Could you tell me how many tokens per second or seconds per token u get? ​
2023-09-12T11:12:28
https://www.reddit.com/r/LocalLLaMA/comments/16goink/how_fast_will_quantized_falcon180b_run_on_system/
Prince-of-Privacy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16goink
false
null
t3_16goink
/r/LocalLLaMA/comments/16goink/how_fast_will_quantized_falcon180b_run_on_system/
false
false
self
1
null
Llama2 7b fine tuning on Sentiment analysis
5
Hello, I am trying to finetune llama 7b on some data extraction from the user input. I did some finetuning for summarisation and it works fine. Now I am trying to extract some info about keyword such as sentiments and the output would be a Sentiment and line describe the sentiment exactly as it is. The format would be like this "###Input: Some user input###Keyword: SPA ###Output: Topic(name[KEYWORD], sentiment[SENTIMENT], line[line describing that sentiment]) <|end|>" But after finetuning 3000 steps, I see the loss decreasing but when tested It output lines which are not in the User input. What could be the reason, is it the 7B model limitation or I need to train more steps.
2023-09-12T10:07:49
https://www.reddit.com/r/LocalLLaMA/comments/16gncpp/llama2_7b_fine_tuning_on_sentiment_analysis/
Intelligent-Fan-2461
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gncpp
false
null
t3_16gncpp
/r/LocalLLaMA/comments/16gncpp/llama2_7b_fine_tuning_on_sentiment_analysis/
false
false
self
5
null
Finetuning a model for world lore knowledge
10
My brother and I are working on a project aimed at "teaching" base llama2 7b on a broad unstructured dataset of Skyrim lore to create a base model for a Skyrim NPC that is grounded in the world. Step one is to compile a few text files of scraped information from the wiki and train a LoRA in oobabooga via colab. We've made good progress, and have the datasets available, but we're struggling to work out which parameters we should be modifying in order for the model to really learn this knowledge without overfitting. It quickly follows the style and tone of the data, however the information it spits out when prompted with a specific piece of data (e.g. a location) is always incorrect (even feeding half a sample exactly, it'll continue with believable sounding information, however it doesnt align with the training data). Worth also mentioning that we're aiming to then finetune for conversation on top of this model after we're happy with the first step. We'd be super curious to get some ideas on what parameters would be most effective for this task. E.g., should we be aiming for 5+ epochs, or aiming to keep it low? <512 rank or more? (I've heard people suggest both, so I'm a little stumped) What kind of batch size is recommended? It's important the model learns the information in the data for conversation later. The idea being that You can ask a character what Markarth is, and they will have enough information available to describe the city accurately. We're new to fine-tuning, so any pointers or help would be appreciated!
2023-09-12T09:54:50
https://www.reddit.com/r/LocalLLaMA/comments/16gn4h7/finetuning_a_model_for_world_lore_knowledge/
Goatman117
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gn4h7
false
null
t3_16gn4h7
/r/LocalLLaMA/comments/16gn4h7/finetuning_a_model_for_world_lore_knowledge/
false
false
self
10
null
eGPU to increase VRAM capacity
11
I've been exploring locally run LLMs recently (as a completely non-technical novice) and I'm looking for ways to expand VRAM capacity to load larger models without the need to substantially reconfigure my existing set up (4090 + 7950x3d + 64gb RAM). I'm considering getting an eGPU to potentially run a 4080 or another 4090 in parallel with my main system, but had a few queries before diving into it, as I can't see that many other people have done this (perhaps for good reason?): * Has anyone had any experience doing this? How smooth was the experience? * Would adding a 2nd GPU in this manner effectively combine the VRAM or are there limitations I should be aware of? I'm trying to avoid rebuilding my entire PC since I also use it for other non-AI tasks, so finding a way to just expand my VRAM would be convenient.
2023-09-12T09:47:51
https://www.reddit.com/r/LocalLLaMA/comments/16gn0ij/egpu_to_increase_vram_capacity/
TheCunningBee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gn0ij
false
null
t3_16gn0ij
/r/LocalLLaMA/comments/16gn0ij/egpu_to_increase_vram_capacity/
false
false
self
11
null
15 comparisons with 3-bit, 4-bit, 5-bit, 6-bit and 8-bit to test how quantisation affects model output
98
[https://rentry.org/quants](https://rentry.org/quants) I did 15 basic comparisons with GGML 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit and GPTQ 4-bit to test how quants change responses. Model is Vicuna 33b, and no coding questions cause its bad with coding The tests aren't super thorough but it helped me settle on a quant, Q8 is good but slow and Q5\_K\_M is what I'd use. I dunno why but Q6\_K did badly. I used TGI's debug-deterministic for greedy decoding so any change in the output is from quant differences
2023-09-12T09:13:18
https://www.reddit.com/r/LocalLLaMA/comments/16gmfwd/15_comparisons_with_3bit_4bit_5bit_6bit_and_8bit/
GrapeCharacter2746
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gmfwd
false
null
t3_16gmfwd
/r/LocalLLaMA/comments/16gmfwd/15_comparisons_with_3bit_4bit_5bit_6bit_and_8bit/
false
false
self
98
{'enabled': False, 'images': [{'id': 'b1E8sI-kTet-3YOFKrYAUVQ9ABbay60W7WEBpTM34S8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/sld18DTFrG0vgxxCwlYEGWKS7hjSXmQsKHgNjUEATAk.jpg?width=108&crop=smart&auto=webp&s=5f7d74321748816977c2c47d74607125fd510a17', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/sld18DTFrG0vgxxCwlYEGWKS7hjSXmQsKHgNjUEATAk.jpg?width=216&crop=smart&auto=webp&s=9c08000e015b470c7d577334237c7dee99c37847', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/sld18DTFrG0vgxxCwlYEGWKS7hjSXmQsKHgNjUEATAk.jpg?width=320&crop=smart&auto=webp&s=628b4e1ef982e336b9ee2da5dbacecc2774b6d65', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/sld18DTFrG0vgxxCwlYEGWKS7hjSXmQsKHgNjUEATAk.jpg?auto=webp&s=9e4cec75ef4248064a481db4ef5f29637aec6e67', 'width': 512}, 'variants': {}}]}
License status of semantic search models with respect to licenses for training datasets.
1
Ok, so most of semantic search models from this leaderboard: [https://huggingface.co/spaces/mteb/leaderboard](https://huggingface.co/spaces/mteb/leaderboard) or this leaderboard: [https://paperswithcode.com/sota/zero-shot-text-search-on-beir](https://paperswithcode.com/sota/zero-shot-text-search-on-beir) are trained on data that are only licensed for noncommercial use - such as MS MARCO. [https://microsoft.github.io/msmarco/](https://microsoft.github.io/msmarco/) Meanwhile models are often license on MIT, Apache or other similar free license. Does the license for the data affect the model license? Are there any cases or legal studies dealing with issue? **Can we use commercially licensed model trained on noncommercially licensed data for commercial purposes without legal risks?**
2023-09-12T08:34:59
https://www.reddit.com/r/LocalLLaMA/comments/16glt6k/license_status_of_semantic_search_models_with/
FormerIYI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16glt6k
false
null
t3_16glt6k
/r/LocalLLaMA/comments/16glt6k/license_status_of_semantic_search_models_with/
false
false
self
1
{'enabled': False, 'images': [{'id': 'CCVJBt0-kH9o-QPgo7qiP6d0ggaejrGSkWh3JVtDcDI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/iB9eNi3rxIJpvd7zLjet2NAGGdmv8p6wAhdFyU5XkOE.jpg?width=108&crop=smart&auto=webp&s=849a3fa499b884c2b8710447c5fa7e81ea1d62dd', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/iB9eNi3rxIJpvd7zLjet2NAGGdmv8p6wAhdFyU5XkOE.jpg?width=216&crop=smart&auto=webp&s=065a3d9f73e3d30917818732d3b52bd258701450', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/iB9eNi3rxIJpvd7zLjet2NAGGdmv8p6wAhdFyU5XkOE.jpg?width=320&crop=smart&auto=webp&s=f84b8e1967e9366f08b71df8df7d28872547b0f9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/iB9eNi3rxIJpvd7zLjet2NAGGdmv8p6wAhdFyU5XkOE.jpg?width=640&crop=smart&auto=webp&s=c516ab5717dc5f8c117a7c3f6c737b5b76da054b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/iB9eNi3rxIJpvd7zLjet2NAGGdmv8p6wAhdFyU5XkOE.jpg?width=960&crop=smart&auto=webp&s=bfc377360bf629e87d8f599bdab20402ab507979', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/iB9eNi3rxIJpvd7zLjet2NAGGdmv8p6wAhdFyU5XkOE.jpg?width=1080&crop=smart&auto=webp&s=6275c2854aeb552e46a627255c27691406e6de3b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/iB9eNi3rxIJpvd7zLjet2NAGGdmv8p6wAhdFyU5XkOE.jpg?auto=webp&s=306c3d89e946b9aa6841c52fedf2738bc28f2689', 'width': 1200}, 'variants': {}}]}
WizardCoder python 34 q8 results
7
2023-09-12T08:19:08
https://v.redd.it/nrpiniphasnb1
Nondzu
/r/LocalLLaMA/comments/16gljzw/wizardcoder_python_34_q8_results/
1970-01-01T00:00:00
0
{}
16gljzw
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/nrpiniphasnb1/DASHPlaylist.mpd?a=1697185151%2CYzg4MGI3NGFmNDIzYTIwZjlmNzFjZjc4ZWQ5MGEwMDFjNmYwYjgyNTMwYWRmM2U5NjY2ZTc5NTRlMWZlNzJmMQ%3D%3D&v=1&f=sd', 'duration': 22, 'fallback_url': 'https://v.redd.it/nrpiniphasnb1/DASH_1080.mp4?source=fallback', 'height': 1920, 'hls_url': 'https://v.redd.it/nrpiniphasnb1/HLSPlaylist.m3u8?a=1697185151%2CMTlmZjljM2JiM2I2MTcwYWI0NTMxOGEyYjg1YmRlMTk0MjAzMjAyNzFjYjJkMWUwMGM1NGIwYjBhYzYzY2QwOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/nrpiniphasnb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}}
t3_16gljzw
/r/LocalLLaMA/comments/16gljzw/wizardcoder_python_34_q8_results/
false
false
https://external-preview…c9aeeaa523703eb9
7
{'enabled': False, 'images': [{'id': 'dGVleDZla2hhc25iMXClJ8BFqJrL2EJPWHRD0PV0-k53p-iNkQs1rOUCLQxH', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/dGVleDZla2hhc25iMXClJ8BFqJrL2EJPWHRD0PV0-k53p-iNkQs1rOUCLQxH.png?width=108&crop=smart&format=pjpg&auto=webp&s=e0c16cf2edf3fc6aabc64e8330431e189428d7a0', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/dGVleDZla2hhc25iMXClJ8BFqJrL2EJPWHRD0PV0-k53p-iNkQs1rOUCLQxH.png?width=216&crop=smart&format=pjpg&auto=webp&s=e05413992f0d1ea7fe8a655ac6f67551861f2c4e', 'width': 216}, {'height': 569, 'url': 'https://external-preview.redd.it/dGVleDZla2hhc25iMXClJ8BFqJrL2EJPWHRD0PV0-k53p-iNkQs1rOUCLQxH.png?width=320&crop=smart&format=pjpg&auto=webp&s=c05044f1b376b948893d4b6c3c9bf3828c3076ec', 'width': 320}, {'height': 1138, 'url': 'https://external-preview.redd.it/dGVleDZla2hhc25iMXClJ8BFqJrL2EJPWHRD0PV0-k53p-iNkQs1rOUCLQxH.png?width=640&crop=smart&format=pjpg&auto=webp&s=2a52e98691070acf3b779ddf1cfe2cd8ef30ede2', 'width': 640}, {'height': 1708, 'url': 'https://external-preview.redd.it/dGVleDZla2hhc25iMXClJ8BFqJrL2EJPWHRD0PV0-k53p-iNkQs1rOUCLQxH.png?width=960&crop=smart&format=pjpg&auto=webp&s=3d94bacdcacc37030b62311538031f4692b98a59', 'width': 960}], 'source': {'height': 1790, 'url': 'https://external-preview.redd.it/dGVleDZla2hhc25iMXClJ8BFqJrL2EJPWHRD0PV0-k53p-iNkQs1rOUCLQxH.png?format=pjpg&auto=webp&s=52273fa7d393ded2c5b9dd854761d46a0a2693f7', 'width': 1006}, 'variants': {}}]}
WizardCoder python 34 q8 results
1
Just want to share my 0 professional result with 3x3090 and wizard coder For me this model with llama.cpp is really good for coding and it write 200 lines script without problem. I don't use any humaneval etc benchmark for test it. I just work with it like usually I work with chatgpt. IMAO the wizard is writing more modern code and now more trics. Tldr: WizardCoder generate useful code for me. It's my winner 🏆 Can't wait for 70b version So
2023-09-12T08:15:33
https://v.redd.it/x5bvcp5v9snb1
Nondzu
/r/LocalLLaMA/comments/16glhup/wizardcoder_python_34_q8_results/
1970-01-01T00:00:00
0
{}
16glhup
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/x5bvcp5v9snb1/DASHPlaylist.mpd?a=1697184935%2CN2Q3ODIzZDE5Y2VhNmZlZjk3ZjU1OTcyOWFlOWU3NTFmMzUxMGI5MTc1ZmFkYmMyZWIxOGRhZGE5YWVlYTM1Ng%3D%3D&v=1&f=sd', 'duration': 17, 'fallback_url': 'https://v.redd.it/x5bvcp5v9snb1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/x5bvcp5v9snb1/HLSPlaylist.m3u8?a=1697184935%2CNDQ3YzMzZTllNGI3ZWJkNTFhYjY5OWJlMTgzMDZiNDYwYjgzOTFkZjMzZGJkMTkxYmIyNWRjZTUzOGU4ZmU3Yw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/x5bvcp5v9snb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_16glhup
/r/LocalLLaMA/comments/16glhup/wizardcoder_python_34_q8_results/
false
false
https://external-preview…b219556206481574
1
{'enabled': False, 'images': [{'id': 'bjh3bHY2MXY5c25iMQofC0sK07iJQ9mdY6w694dw5FJz9NQXGUiCxNnPTek5', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bjh3bHY2MXY5c25iMQofC0sK07iJQ9mdY6w694dw5FJz9NQXGUiCxNnPTek5.png?width=108&crop=smart&format=pjpg&auto=webp&s=c6e213e24e3c43edc7b3ac477c0bc1988fb7922f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bjh3bHY2MXY5c25iMQofC0sK07iJQ9mdY6w694dw5FJz9NQXGUiCxNnPTek5.png?width=216&crop=smart&format=pjpg&auto=webp&s=66d123302567f674e6b8a97aecf93b6237399bb2', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/bjh3bHY2MXY5c25iMQofC0sK07iJQ9mdY6w694dw5FJz9NQXGUiCxNnPTek5.png?width=320&crop=smart&format=pjpg&auto=webp&s=6573c6fe79e20120b5c2222a3d3f128da934f7df', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/bjh3bHY2MXY5c25iMQofC0sK07iJQ9mdY6w694dw5FJz9NQXGUiCxNnPTek5.png?width=640&crop=smart&format=pjpg&auto=webp&s=a9f566be49ba0e724516ae243d17ee0f907d83cc', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/bjh3bHY2MXY5c25iMQofC0sK07iJQ9mdY6w694dw5FJz9NQXGUiCxNnPTek5.png?width=960&crop=smart&format=pjpg&auto=webp&s=82191f946d2aa79866600fbe4c3f735bf09b5f2a', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bjh3bHY2MXY5c25iMQofC0sK07iJQ9mdY6w694dw5FJz9NQXGUiCxNnPTek5.png?width=1080&crop=smart&format=pjpg&auto=webp&s=300857bda79a0d6d77ec17c4b0a32a973b16242d', 'width': 1080}], 'source': {'height': 607, 'url': 'https://external-preview.redd.it/bjh3bHY2MXY5c25iMQofC0sK07iJQ9mdY6w694dw5FJz9NQXGUiCxNnPTek5.png?format=pjpg&auto=webp&s=ada50d06e1d4eaac3c48ad64b4c9e850fece0ef5', 'width': 1080}, 'variants': {}}]}
What is the most novel ways you used llms?
24
Title. What is the most creative or novel ways you used llm? &#x200B; I know the common usecases: \- Text distallation e.g. Summurization and Information Extraction \- Text transformation e.g. translation and rewriting \- Text expansion e.g. Brainstorming, generating new content \- Chatbot QA and RP \- Self Prompting, like AutoGPT &#x200B; &#x200B; For what else is it used out there?
2023-09-12T07:20:27
https://www.reddit.com/r/LocalLLaMA/comments/16gkkoc/what_is_the_most_novel_ways_you_used_llms/
Astronos
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gkkoc
false
null
t3_16gkkoc
/r/LocalLLaMA/comments/16gkkoc/what_is_the_most_novel_ways_you_used_llms/
false
false
self
24
null
Best data format for passing data to llam 2 using llama index
0
I have html pages that contain text, images, hyperlinks and tables. What would be the best way to send it to llama 2 using llama index, so that the model understands the data properly.
2023-09-12T06:25:08
https://www.reddit.com/r/LocalLLaMA/comments/16gjnxm/best_data_format_for_passing_data_to_llam_2_using/
zaid-70
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gjnxm
false
null
t3_16gjnxm
/r/LocalLLaMA/comments/16gjnxm/best_data_format_for_passing_data_to_llam_2_using/
false
false
self
0
null
Are any of the models that equal/beat GPT 3.5 price competitive right now? (cloud)
6
Saw a few posts about models like Llama 2 70b beating GPT 3.5 but being far more expensive to run due to ChatGPT being subsidized by Microsoft. Wondering if that's still true or if there's any model we can run online that is price competitive in terms of cost per token? What about locally if we want to need to use a 10M-1B tokens over a few months? (Edit: In terms of performance, I've found GPT 3.5 4K context works pretty well for my needs, but gpt4 is far better but not worth the extra cost.)
2023-09-12T05:57:57
https://www.reddit.com/r/LocalLLaMA/comments/16gj7cl/are_any_of_the_models_that_equalbeat_gpt_35_price/
Ill_Fox8807
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gj7cl
false
null
t3_16gj7cl
/r/LocalLLaMA/comments/16gj7cl/are_any_of_the_models_that_equalbeat_gpt_35_price/
false
false
self
6
null
Any local model good with instructions and/or function calling?
1
Could someone pls help me out? :)
2023-09-12T05:30:28
https://www.reddit.com/r/LocalLLaMA/comments/16giqp9/any_local_model_good_with_instructions_andor/
schmedu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16giqp9
false
null
t3_16giqp9
/r/LocalLLaMA/comments/16giqp9/any_local_model_good_with_instructions_andor/
false
false
self
1
null
How to resume fine tuning with the autotrain-advanced utils?
1
I had a fine tuning session earlier today that completed and wish to restart to pick up where it left off and continue to train. But judging from the loss rate it's starting all over again. How do I force autotrain to resume a fine tuning?
2023-09-12T05:27:59
https://www.reddit.com/r/LocalLLaMA/comments/16gip5f/how_to_resume_fine_tuning_with_the/
SiliconObsessed
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gip5f
false
null
t3_16gip5f
/r/LocalLLaMA/comments/16gip5f/how_to_resume_fine_tuning_with_the/
false
false
self
1
null
Why Do the LLaMA Models Need So Many Parameters for Good Results When Stable Diffusion Needs So Few?
64
Forgive me if this seems like an ignorant question, since I am myself quite ignorant to machine learning in general, but I found something strange. The Stable Diffusion models can generally run quite well on a 8 GB VRAM GPU and can generate really good looking results. However, it seems that many of the LLaMA language models and their fine tunes that can run on a 8 GB VRAM GPU might not generate the best results. Maybe this is just ignorance on my part, but I wonder why a smaller model trained on images, which I think are really dense representations of information, tend to generate visually appealing results and why language models, in which words might not encode as much information as the images, tend to require more parameters to train. Again, I apologize for potentially asking a stupid questions, and maybe this question didn’t even make sense, but any leads might be quite nice. Thank you for your time.
2023-09-12T04:39:31
https://www.reddit.com/r/LocalLLaMA/comments/16ghtw6/why_do_the_llama_models_need_so_many_parameters/
AlterandPhil
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16ghtw6
false
null
t3_16ghtw6
/r/LocalLLaMA/comments/16ghtw6/why_do_the_llama_models_need_so_many_parameters/
false
false
self
64
null
Microsoft research releases phil-1.5 a 1.3B model trained on 7B tokens. 55.5% at human eval
1
[removed]
2023-09-12T04:26:00
https://i.redd.it/qcljm2gy4rnb1.jpg
m477h13U
i.redd.it
1970-01-01T00:00:00
0
{}
16ghkt8
false
null
t3_16ghkt8
/r/LocalLLaMA/comments/16ghkt8/microsoft_research_releases_phil15_a_13b_model/
false
false
https://b.thumbs.redditm…0p3YsHVEDFqw.jpg
1
{'enabled': True, 'images': [{'id': 'ZL7GGrNoPu-5NFoGZmntbBykSgAPx9P_7bA--nqhaYI', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/qcljm2gy4rnb1.jpg?width=108&crop=smart&auto=webp&s=e5f892229bb55ce4713a537be2269aa3cec8c435', 'width': 108}, {'height': 99, 'url': 'https://preview.redd.it/qcljm2gy4rnb1.jpg?width=216&crop=smart&auto=webp&s=61698b710365b22ea617e89fd6fdf881651152a5', 'width': 216}, {'height': 147, 'url': 'https://preview.redd.it/qcljm2gy4rnb1.jpg?width=320&crop=smart&auto=webp&s=d5289124b00f112b3fe0c67bb3df334c8d1a2e80', 'width': 320}, {'height': 295, 'url': 'https://preview.redd.it/qcljm2gy4rnb1.jpg?width=640&crop=smart&auto=webp&s=f113e7b3a8c07aa36795b3542c7bb52757363d7f', 'width': 640}, {'height': 443, 'url': 'https://preview.redd.it/qcljm2gy4rnb1.jpg?width=960&crop=smart&auto=webp&s=c36515a54054c4d0d02ff6cb4e7e73eb83f68390', 'width': 960}, {'height': 498, 'url': 'https://preview.redd.it/qcljm2gy4rnb1.jpg?width=1080&crop=smart&auto=webp&s=bb771a5a029c48f757a5c10bb50e081bfc91758f', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/qcljm2gy4rnb1.jpg?auto=webp&s=2d02cca87826afc584d96411d83173212f48c142', 'width': 2340}, 'variants': {}}]}
Super basic questions about H2OGPT, Models, etc. from a noob
1
[removed]
2023-09-12T04:04:09
https://www.reddit.com/r/LocalLLaMA/comments/16gh5tb/super_basic_questions_about_h2ogpt_models_etc/
consig1iere
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gh5tb
false
null
t3_16gh5tb
/r/LocalLLaMA/comments/16gh5tb/super_basic_questions_about_h2ogpt_models_etc/
false
false
self
1
null
Phi-1.5: 41.4% HumanEval in 1.3B parameters (model download link in comments)
114
2023-09-12T03:57:44
https://arxiv.org/abs/2309.05463
ethanhs
arxiv.org
1970-01-01T00:00:00
0
{}
16gh0yv
false
null
t3_16gh0yv
/r/LocalLLaMA/comments/16gh0yv/phi15_414_humaneval_in_13b_parameters_model/
false
false
https://b.thumbs.redditm…mJwVVvVekIeA.jpg
114
{'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]}
Code llama, is it good for pure C programming?
6
I understand pure C is indeed supported, but I wonder if it is well supported. In particular, could it beat github copilot? From the review I gathered, it seems to focus more on C++, python and Javascript. I wonder if it is only because there aren't as many pure C programmers these days so I don't get to see their reviews.
2023-09-12T02:24:47
https://www.reddit.com/r/LocalLLaMA/comments/16gf37h/code_llama_is_it_good_for_pure_c_programming/
Studying_Man
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gf37h
false
null
t3_16gf37h
/r/LocalLLaMA/comments/16gf37h/code_llama_is_it_good_for_pure_c_programming/
false
false
self
6
null
Is Nvidia finally going to have some competition in the generative AI space?
30
2023-09-12T01:53:16
https://www.d-matrix.ai/
onil_gova
d-matrix.ai
1970-01-01T00:00:00
0
{}
16gee9d
false
null
t3_16gee9d
/r/LocalLLaMA/comments/16gee9d/is_nvidia_finally_going_to_have_some_competition/
false
false
default
30
{'enabled': False, 'images': [{'id': 'byHvkMdcHf3fJZlz_BmfbEqKYnjwR3ijDTngact6bU0', 'resolutions': [{'height': 36, 'url': 'https://external-preview.redd.it/XD6Om61idqouM-X79i7uYAMmlCOh-jifSBDc5yexKW8.jpg?width=108&crop=smart&auto=webp&s=28620868d2949ffcc08cbd4a6d0fbe786a2a4bc2', 'width': 108}, {'height': 72, 'url': 'https://external-preview.redd.it/XD6Om61idqouM-X79i7uYAMmlCOh-jifSBDc5yexKW8.jpg?width=216&crop=smart&auto=webp&s=3ad36c0c70fbb14a300af20a3c0be33b80888c3b', 'width': 216}, {'height': 106, 'url': 'https://external-preview.redd.it/XD6Om61idqouM-X79i7uYAMmlCOh-jifSBDc5yexKW8.jpg?width=320&crop=smart&auto=webp&s=9f509b041baff0f841fdc7262560cef6279b84d7', 'width': 320}, {'height': 213, 'url': 'https://external-preview.redd.it/XD6Om61idqouM-X79i7uYAMmlCOh-jifSBDc5yexKW8.jpg?width=640&crop=smart&auto=webp&s=e2dc6cba0367b52def01c763af7cde1284f059c2', 'width': 640}, {'height': 320, 'url': 'https://external-preview.redd.it/XD6Om61idqouM-X79i7uYAMmlCOh-jifSBDc5yexKW8.jpg?width=960&crop=smart&auto=webp&s=a525b53764198c01013a653fb6d6b4a8314d40b5', 'width': 960}, {'height': 360, 'url': 'https://external-preview.redd.it/XD6Om61idqouM-X79i7uYAMmlCOh-jifSBDc5yexKW8.jpg?width=1080&crop=smart&auto=webp&s=ef6a2d4de0c566ad737f7363fcb99d0184e1cc4c', 'width': 1080}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/XD6Om61idqouM-X79i7uYAMmlCOh-jifSBDc5yexKW8.jpg?auto=webp&s=91ad379a1e892718387f5f55aa3f621124583599', 'width': 1500}, 'variants': {}}]}
Can fine tuning remove censorship/alignment from ChatGPT models?
1
[removed]
2023-09-12T01:39:51
https://www.reddit.com/r/LocalLLaMA/comments/16ge3l1/can_fine_tuning_remove_censorshipalignment_from/
NickDifuze
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16ge3l1
false
null
t3_16ge3l1
/r/LocalLLaMA/comments/16ge3l1/can_fine_tuning_remove_censorshipalignment_from/
false
false
self
1
null
Is it worth it to return my 3080 and get a used 3090?
15
I'm a first year cs major and I know for certain that I want to get involved with AI. I watched some videos on neural networks (including the 3blue1brown series on them) and I got hooked on the way that pure math and computer code is used to transform textual inputs into recognizable outputs in a way that mimics "real" intelligence. I recently built a new gaming PC with a 3080 10gb, but now I'm not sure if it will be enough to experiment with existing models or even train my own. I could afford the $600-700 for a used 3090 with its 24gb of vram, but only if I know that it will actually provide me with significantly more value than my 3080. Should I return my 3080 and get the 3090? TLDR: Want to learn everything I can about neural networks, but only have a 3080 with 10gb of vram. Is it worth it to upgrade to a 3090 if my goal is to experiment and learn about AI?
2023-09-12T00:57:41
https://www.reddit.com/r/LocalLLaMA/comments/16gd53d/is_it_worth_it_to_return_my_3080_and_get_a_used/
InteractionQuiet9169
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gd53d
false
null
t3_16gd53d
/r/LocalLLaMA/comments/16gd53d/is_it_worth_it_to_return_my_3080_and_get_a_used/
false
false
self
15
null
Does a guanaco 65b gguf exist?
1
[removed]
2023-09-11T23:47:20
https://www.reddit.com/r/LocalLLaMA/comments/16gbjrt/does_a_guanaco_65b_gguf_exist/
wh33t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gbjrt
false
null
t3_16gbjrt
/r/LocalLLaMA/comments/16gbjrt/does_a_guanaco_65b_gguf_exist/
false
false
self
1
null
ggPrompt: explore questions and topics generated and structured by AI
1
Hey everyone, I've made a new site called ggPrompt.org. [From Philosophy to Pythagorean Metempsychosis](https://preview.redd.it/38s28vzlmpnb1.png?width=1928&format=png&auto=webp&s=d685d010b98c448c8fd1734bf959336305f94481) It helps you come up with questions and explore new scientific topics quickly (similar to those wikipedia rabbit holes). [https://ggprompt.org/](https://ggprompt.org/) It's still in an early beta so I'd really appreciate any feedback or thoughts you have. Tags and structure are generated by GPT4 while some of the prompts are a mix of GPT4 and OpenSource models. I have many ideas, so the site will continue to grow in both quality and quantity. Happy digging!
2023-09-11T23:28:50
https://www.reddit.com/r/LocalLLaMA/comments/16gb490/ggprompt_explore_questions_and_topics_generated/
vmirnv
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16gb490
false
null
t3_16gb490
/r/LocalLLaMA/comments/16gb490/ggprompt_explore_questions_and_topics_generated/
false
false
https://b.thumbs.redditm…6FXp7qhOnS8U.jpg
1
null
NVidia vGPU on esx
5
Was having a browse today and somehow slipped down a rabbit hole. I wondered if anyone has tried a virtualized esxi instance with nVidia vGPUs? It seems that esxi is capable, with the right drivers and support, of merging multiple cards into a single resource. Obviously there will be some overhead, but with a good native driver this could be minimised. https://docs.nvidia.com/grid/latest/grid-software-quick-start-guide/ Only applies to select GPUs I realise,.bit could potentially allow for larger single device vram sizes ( along with increased tensors being spread over multiple cards ). Would be interested if anyone has had any experience?
2023-09-11T22:42:23
https://www.reddit.com/r/LocalLLaMA/comments/16g9y94/nvidia_vgpu_on_esx/
BreakIt-Boris
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16g9y94
false
null
t3_16g9y94
/r/LocalLLaMA/comments/16g9y94/nvidia_vgpu_on_esx/
false
false
self
5
null
How to use multiple GPU's with textgen webui?
1
HI, total noob here, I have a machine which has two nvidia GPU's with 22 GB of VRAM each, which should in total be enough to load a 13b param model, but textgen webui only uses GPU 0 without trying to use GPU 1, so I run into this issue: &#x200B; https://preview.redd.it/8boh4yrgvonb1.png?width=1169&format=png&auto=webp&s=946895709bd5af60b7934a36bd837f7c2cd0a65e How can I make textgen webui run using both GPUs? Thanks
2023-09-11T20:49:38
https://www.reddit.com/r/LocalLLaMA/comments/16g6w5b/how_to_use_multiple_gpus_with_textgen_webui/
Milk_No_Titties
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16g6w5b
false
null
t3_16g6w5b
/r/LocalLLaMA/comments/16g6w5b/how_to_use_multiple_gpus_with_textgen_webui/
false
false
https://a.thumbs.redditm…4XiUEBN8Z9w0.jpg
1
null
Issue with RAG implementations (h2ogpt, localgpt, etc)
1
[removed]
2023-09-11T20:21:30
https://www.reddit.com/r/LocalLLaMA/comments/16g644d/issue_with_rag_implementations_h2ogpt_localgpt_etc/
xIndirect
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16g644d
false
null
t3_16g644d
/r/LocalLLaMA/comments/16g644d/issue_with_rag_implementations_h2ogpt_localgpt_etc/
false
false
self
1
null
different VRAM in cards
2
I have a 4090 and a 6000 Ada with a total of 72Gb VRAM (24Gb + 48Gb) Can I sum the VRAM of the two cards for training and inferencing or it will be limited to 24 + 24 ?
2023-09-11T20:13:47
https://www.reddit.com/r/LocalLLaMA/comments/16g5w9m/different_vram_in_cards/
Dry_Honeydew9842
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16g5w9m
false
null
t3_16g5w9m
/r/LocalLLaMA/comments/16g5w9m/different_vram_in_cards/
false
false
self
2
null
GPT user here - what’s the benefit of using these localized models?
49
Are there specific things?
2023-09-11T19:19:17
https://www.reddit.com/r/LocalLLaMA/comments/16g4f5l/gpt_user_here_whats_the_benefit_of_using_these/
livekop
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16g4f5l
false
null
t3_16g4f5l
/r/LocalLLaMA/comments/16g4f5l/gpt_user_here_whats_the_benefit_of_using_these/
false
false
self
49
null
What local client can I use to load a local Llama 2 70B model and then send prompts to it via a python script and get a return as a string?
3
I use oobabooga but I need a client that will let me make my own interface using streamlit and load models myself via an API
2023-09-11T19:03:33
https://www.reddit.com/r/LocalLLaMA/comments/16g3zvy/what_local_client_can_i_use_to_load_a_local_llama/
countrycruiser
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16g3zvy
false
null
t3_16g3zvy
/r/LocalLLaMA/comments/16g3zvy/what_local_client_can_i_use_to_load_a_local_llama/
false
false
self
3
null
Raspberry Pi 8GB running TinyLlama, can anyone report the user experience?
12
I suck and my unit has not shipped, and it's burning a hole in my brain. I was just wondering if anyone has, or could, try to run the new TinyLlama 1.1B model on the beefy 8GB RAM raspberry pi and let us all know what that is like? Shameful bribery be damned, you'll have my upvote!
2023-09-11T18:48:22
https://www.reddit.com/r/LocalLLaMA/comments/16g3kla/raspberry_pi_8gb_running_tinyllama_can_anyone/
Actual-Bad5029
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16g3kla
false
null
t3_16g3kla
/r/LocalLLaMA/comments/16g3kla/raspberry_pi_8gb_running_tinyllama_can_anyone/
false
false
self
12
null
Weird Response
1
So, I used the bloke's 13b 128k model. I used prompt Alpaca-with-input. In the instruction I added context of a document then the q and a below was my result??? Not sure what to do..... ### Input: Will you list the steps in a easy to read format? ### Response: 1031cm1agtOtO01the(OetOetO5thedtO61thO01750tOtO071thetO76tOpro1etOtO5tOtOtOcce1tO1tOtOthetO01tO1p13501tO5411tOtO1tOcfctO7m0c01e1e01tO0tO1tO7411112c0 e101e0211tO 6300002122tO0022122c6 tthe014 c71113001tO4 c1m5c2c2
2023-09-11T18:47:52
https://www.reddit.com/r/LocalLLaMA/comments/16g3k3x/weird_response/
Leading-Leading6718
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16g3k3x
false
null
t3_16g3k3x
/r/LocalLLaMA/comments/16g3k3x/weird_response/
false
false
self
1
null
Can you run anything on. 4070 ti with 64GB ram?
2
Just curious
2023-09-11T18:42:26
https://www.reddit.com/r/LocalLLaMA/comments/16g3evj/can_you_run_anything_on_4070_ti_with_64gb_ram/
livekop
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16g3evj
false
null
t3_16g3evj
/r/LocalLLaMA/comments/16g3evj/can_you_run_anything_on_4070_ti_with_64gb_ram/
false
false
self
2
null
GGUF.js - open-source JS library (with types) for parsing and reading metadata of ggml-based gguf files.
35
2023-09-11T18:28:53
https://github.com/ahoylabs/gguf.js
719Ben
github.com
1970-01-01T00:00:00
0
{}
16g31mz
false
null
t3_16g31mz
/r/LocalLLaMA/comments/16g31mz/ggufjs_opensource_js_library_with_types_for/
false
false
https://a.thumbs.redditm…eD_b9NRaHFa0.jpg
35
{'enabled': False, 'images': [{'id': 'A9wzb3e5XrBjNIDBenzLtzoFwGMOL7_OnnybFWABZLw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4Ym9B9n7nm2mkyrxSZ-TeyesRQaC6Zx8IgycsW_x-Qw.jpg?width=108&crop=smart&auto=webp&s=6736eb169a426bb2535fa804be8dd12d1a472edc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4Ym9B9n7nm2mkyrxSZ-TeyesRQaC6Zx8IgycsW_x-Qw.jpg?width=216&crop=smart&auto=webp&s=dad2ac53b9e7927aeef2f7f5d569680795d8fa73', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4Ym9B9n7nm2mkyrxSZ-TeyesRQaC6Zx8IgycsW_x-Qw.jpg?width=320&crop=smart&auto=webp&s=fc722fcb0dbedf53339f485af3d696adb356b98c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4Ym9B9n7nm2mkyrxSZ-TeyesRQaC6Zx8IgycsW_x-Qw.jpg?width=640&crop=smart&auto=webp&s=fdd652b798b4fc20b84fed5b07e48a570711e03c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4Ym9B9n7nm2mkyrxSZ-TeyesRQaC6Zx8IgycsW_x-Qw.jpg?width=960&crop=smart&auto=webp&s=395b4a6e26ab6c48f1159a328cdda68a85749202', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4Ym9B9n7nm2mkyrxSZ-TeyesRQaC6Zx8IgycsW_x-Qw.jpg?width=1080&crop=smart&auto=webp&s=f4c3439b2204c9e41507130af3ae14092c43da1a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4Ym9B9n7nm2mkyrxSZ-TeyesRQaC6Zx8IgycsW_x-Qw.jpg?auto=webp&s=1c48b696b7f996df789ccdbb9a3f29a6ea30ec6b', 'width': 1200}, 'variants': {}}]}
Making a chatbot for my wife's mom
17
My MIL passed away almost two years ago, and she and my wife were very close. They chatted quite a bit every day on Facebook messenger for more than a decade, and I'd like to make a bot version. (I have already talked to my wife about this and gotten approval.) I have access to the raw chat logs, which are in html format. I wanted to make sure I understood the workflow for finetuning LLaMA. Answers to the specific questions below and generat tips are appreciated! 1. I have an M2, but my understanding is it's hard to do finetuning locally on Macs now. So I imagine my best bet is to use Colab with autotrain. The final command would be something like `!autotrain llm --train --project_name MILbot --model meta-llama/Llama-2-13b-chat-hf --data_path . --lora_r=32 --lora_alpha=64 --model_max_length=4096 --text_column text --use_peft --use_int8 --learning_rate 2e-4 --train_batch_size 2 --num_train_epochs 5 --block_size 4096 --trainer sft --push_to_hub --repo_id myname/myrepo` 2. Should I get the data into a csv or json file? I think csv for autotrain, but I'm not sure. 3. Is 13b-chat my best bet here? Also, should I use the sft trainer or something else? 4. I want to make sure I'm processing the data correctly, and I'm kind of confused about this part. My understanding is I should break it up into roughly 4k token chunks. And each chunk should have something like the following. `<s>[INST] <<SYS>> You are playing the role of the mother [insert details] <</SYS>> Hi, mom! [/INST] Hi, how are you? </s><s>[INST] I'm good. [/INST] Great </s><s>`
2023-09-11T18:13:46
https://www.reddit.com/r/LocalLLaMA/comments/16g2mx7/making_a_chatbot_for_my_wifes_mom/
eumaximizer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16g2mx7
false
null
t3_16g2mx7
/r/LocalLLaMA/comments/16g2mx7/making_a_chatbot_for_my_wifes_mom/
false
false
self
17
null
New way to speed up inference! Easier than speculative decoding
128
“Medusa adds extra "heads" to LLMs to predict multiple future tokens simultaneously. When augmenting a model with Medusa, the original model stays untouched, and only the new heads are fine-tuned during. During generation, these heads each produce multiple likely words for the corresponding position. These options are then combined and processed using a tree-based attention mechanism. Finally, a typical acceptance scheme is employed to pick the longest plausible prefix from the candidates for further decoding.” https://github.com/FasterDecoding/Medusa
2023-09-11T17:58:18
https://www.reddit.com/r/LocalLLaMA/comments/16g27s0/new_way_to_speed_up_inference_easier_than/
big_ol_tender
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16g27s0
false
null
t3_16g27s0
/r/LocalLLaMA/comments/16g27s0/new_way_to_speed_up_inference_easier_than/
false
false
self
128
{'enabled': False, 'images': [{'id': 'ZaGgACbs-Ed1psgFwxZV06yT9YK_x40PNcljpwPPnN4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wpcJ4kaM6KKhxPUIkFfOGsa5ACGVURFmlt2O5ia8PqM.jpg?width=108&crop=smart&auto=webp&s=2b63bdf51487d798c8c81d77af6c523565e197d9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wpcJ4kaM6KKhxPUIkFfOGsa5ACGVURFmlt2O5ia8PqM.jpg?width=216&crop=smart&auto=webp&s=afb991406b4e1bbfc6e4c15e09b8d03ba5eb1da1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wpcJ4kaM6KKhxPUIkFfOGsa5ACGVURFmlt2O5ia8PqM.jpg?width=320&crop=smart&auto=webp&s=6b479ff8a4829eef4c41e9d579a1b2d8bcff489d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wpcJ4kaM6KKhxPUIkFfOGsa5ACGVURFmlt2O5ia8PqM.jpg?width=640&crop=smart&auto=webp&s=e6d55a279b69eecd3dd410370fc14e1e6ddd81fe', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wpcJ4kaM6KKhxPUIkFfOGsa5ACGVURFmlt2O5ia8PqM.jpg?width=960&crop=smart&auto=webp&s=723556a3d3f7c2f2d7d2545b9e5a0fd071a97ceb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wpcJ4kaM6KKhxPUIkFfOGsa5ACGVURFmlt2O5ia8PqM.jpg?width=1080&crop=smart&auto=webp&s=7de6cd9d7d31111169025c6404837035d962b9a6', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wpcJ4kaM6KKhxPUIkFfOGsa5ACGVURFmlt2O5ia8PqM.jpg?auto=webp&s=6a1c084782fca21e545e7f3a7d76f6dad864830c', 'width': 1200}, 'variants': {}}]}
H2OGPT not saving documents in database
1
Hi, I'm new to this so I may be doing this wrong but on the Windows 11 version of H2OGPT I can upload my documents and create a collection in LangChain Mode-Path. However whenever I restart the documents disappear. This is a pain as I am trying to analyse my own documents. I've clicked on 'Update DB with new/changed files on disc'. Also does it matter if I add documents before or after I load a model? I've tried it with no model and with a model loaded. Speaking of which is it possible to set a default model using the one-click installer of H2OGPT?
2023-09-11T17:57:09
https://www.reddit.com/r/LocalLLaMA/comments/16g26qi/h2ogpt_not_saving_documents_in_database/
ScriptReaderStudio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16g26qi
false
null
t3_16g26qi
/r/LocalLLaMA/comments/16g26qi/h2ogpt_not_saving_documents_in_database/
false
false
self
1
null
Serving qlora fine-tuned models in production
1
vLLM does not support qlora in production. What are the available approaches at the moment until quantization support is added to vLLM?
2023-09-11T16:03:07
https://www.reddit.com/r/LocalLLaMA/comments/16fz5q8/serving_qlora_finetuned_models_in_production/
ComplexIt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16fz5q8
false
null
t3_16fz5q8
/r/LocalLLaMA/comments/16fz5q8/serving_qlora_finetuned_models_in_production/
false
false
self
1
null
The 4 Essential Dataset Types for LLMs: A Deep Dive
1
[removed]
2023-09-11T15:58:09
https://www.reddit.com/r/LocalLLaMA/comments/16fz0qb/the_4_essential_dataset_types_for_llms_a_deep_dive/
l33thaxman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16fz0qb
false
null
t3_16fz0qb
/r/LocalLLaMA/comments/16fz0qb/the_4_essential_dataset_types_for_llms_a_deep_dive/
false
false
self
1
{'enabled': False, 'images': [{'id': 'wl5u9lohlRi8mmgYaIC60x2EUTbSOKC6HdZOnzkjtww', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/OD_NLm9D4wFz0ynPHin3ue0jsjubjVgfpmYRXjyMGDM.jpg?width=108&crop=smart&auto=webp&s=31ddc4662e5530704537cddcf8045c12e59faa6e', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/OD_NLm9D4wFz0ynPHin3ue0jsjubjVgfpmYRXjyMGDM.jpg?width=216&crop=smart&auto=webp&s=5186ce9cf29bc3dfdf400ed3d2fddbeb10977337', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/OD_NLm9D4wFz0ynPHin3ue0jsjubjVgfpmYRXjyMGDM.jpg?width=320&crop=smart&auto=webp&s=14eeda42ea736899c16e09b489d12980fcc46fc2', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/OD_NLm9D4wFz0ynPHin3ue0jsjubjVgfpmYRXjyMGDM.jpg?auto=webp&s=bb979a453ce82bc9534e9ba52e9c446a8e938405', 'width': 480}, 'variants': {}}]}
Does CPU bottleneck GPU in GPTQ models?
0
Will a i7-4770 with an RTX 3090 bottleneck the GPU using GPTQ models in textgen? &#x200B;
2023-09-11T14:29:11
https://www.reddit.com/r/LocalLLaMA/comments/16fwqz7/does_cpu_bottleneck_gpu_in_gptq_models/
Imagummybear23
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16fwqz7
false
null
t3_16fwqz7
/r/LocalLLaMA/comments/16fwqz7/does_cpu_bottleneck_gpu_in_gptq_models/
false
false
self
0
null
ContentatScale and Winston AI as alternatives to Turnitin AI
0
As the title says, can these tools be used as alternatives to Turnitin's AI detector? Can anyone speak with any previous experience regarding this? Thank you.
2023-09-11T14:25:32
https://www.reddit.com/r/LocalLLaMA/comments/16fwnxa/contentatscale_and_winston_ai_as_alternatives_to/
psj_2908
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16fwnxa
false
null
t3_16fwnxa
/r/LocalLLaMA/comments/16fwnxa/contentatscale_and_winston_ai_as_alternatives_to/
false
false
self
0
null
Noob question, how to begin? Questions for the a first time training/running.
12
Hi! I´ve been reading and seeing some videos. However I still have some doubts about the best path and any tips would be greatly appreciated. 1. What do you guys think is the easy model to run at first and them try some training on local data? 2. I would like to train the model using several research paper´s that I will use as reference, lab reports, and my thesis. Thus the model would be able, to respond questions based on it. Does anyone know a guide/tutorial? 3. I´ve a 4090 and 128GB of ram. In terms of OS ( Windows, linux, docker,...) what is the easy way, do you guys think to begin, and learn? 4. Do you guys know any compilation of available models and characteristics? &#x200B; Thank you guys.
2023-09-11T14:24:58
https://www.reddit.com/r/LocalLLaMA/comments/16fwndq/noob_question_how_to_begin_questions_for_the_a/
No_One_BR
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16fwndq
false
null
t3_16fwndq
/r/LocalLLaMA/comments/16fwndq/noob_question_how_to_begin_questions_for_the_a/
false
false
self
12
null
Are PCI-E 4.0 x16 and PCI-E 4.0 x4 good enough for 2 GPUs to run LLMs?
10
My motherboard is MSI X670P Wifi which has a PCI-E 4.0 x16 and a PCI-E 4.0 x4. I read somewhere that you need at least x8 speed to run LLMs. Could anyone confirm this? Any advice is also welcome!
2023-09-11T13:12:01
https://www.reddit.com/r/LocalLLaMA/comments/16fuxl1/are_pcie_40_x16_and_pcie_40_x4_good_enough_for_2/
tgredditfc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16fuxl1
false
null
t3_16fuxl1
/r/LocalLLaMA/comments/16fuxl1/are_pcie_40_x16_and_pcie_40_x4_good_enough_for_2/
false
false
self
10
null
How To Develop A Token Streaming UI For Your LLaMA model With Go, FastAPI And JS
1
I have been struggling with token streaming for a while. Now that I have something solid I thought I would share how I did it because I couldn't find so many useful resources about it on the web... Basically I needed to build a nice interface for my LLaMA models, allowing me to see the text showing up dynamically on the screen (aka "token streaming"). Same as what we can see on the ChatGPT UI. This is pretty useful because these language models can be so slow that waiting for the whole response to be ready can be a pain... As I am expecting quite a lot of users to be playing with the UI concurrently, I thought that Go would be a good choice (and I haven't been disappointed so far). So basically my stack is the following: * Creating an "Event Source" in the browser in Javascript in order to read Server Sent Events (SSE) * Render the HTML page with Go, read the user request, forward it to the language model backend, and forward the streamed tokens to the JS frontend with SSE as soon as they arrive * Deploy the language model with Python and a framework allowing for token streaming like Hugging Face Transformers * Add a small FastAPI interface on top of the large language model in order to communicate with the Go frontend Here is the detailed how-to: [https://nlpcloud.com/how-to-develop-a-token-streaming-ui-for-your-llm-with-go-fastapi-and-js.html](https://nlpcloud.com/how-to-develop-a-token-streaming-ui-for-your-llm-with-go-fastapi-and-js.html?utm_source=reddit&utm_campaign=i859w625-3816-81ed-a261-0242ac140019) It works very well, but maybe there exists an even better setup? In that case I would love to hear your suggestions! Julien
2023-09-11T13:09:38
https://www.reddit.com/r/LocalLLaMA/comments/16fuvn8/how_to_develop_a_token_streaming_ui_for_your/
juliensalinas
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16fuvn8
false
null
t3_16fuvn8
/r/LocalLLaMA/comments/16fuvn8/how_to_develop_a_token_streaming_ui_for_your/
false
false
self
1
{'enabled': False, 'images': [{'id': 'wlI6wBz5sfDYABqSyw01yUccWpHrhKSmweaoFT-WH_g', 'resolutions': [{'height': 75, 'url': 'https://external-preview.redd.it/e51Abb4zSfrt-dfj0Ra5LgfHVrzz6XRX9aE-5wDXt1U.jpg?width=108&crop=smart&auto=webp&s=4fbc4c914252035a68763934e2d5991b0146765b', 'width': 108}, {'height': 151, 'url': 'https://external-preview.redd.it/e51Abb4zSfrt-dfj0Ra5LgfHVrzz6XRX9aE-5wDXt1U.jpg?width=216&crop=smart&auto=webp&s=19c2d28802ad13441ca3f1710043d3d2a2add35d', 'width': 216}, {'height': 224, 'url': 'https://external-preview.redd.it/e51Abb4zSfrt-dfj0Ra5LgfHVrzz6XRX9aE-5wDXt1U.jpg?width=320&crop=smart&auto=webp&s=b4be43254c7e7d38476ca54a1c1266383f89e772', 'width': 320}], 'source': {'height': 421, 'url': 'https://external-preview.redd.it/e51Abb4zSfrt-dfj0Ra5LgfHVrzz6XRX9aE-5wDXt1U.jpg?auto=webp&s=bc7dff0c9d4602528d596bd20f171aadfaae7da4', 'width': 600}, 'variants': {}}]}
Integrating llama 2 in word processor?
5
I was wondering if there is any way to integrate Llama 2 with a word processor, such as Microsoft Word or Google Docs, so that I can use it to help write and fleah out documents. I think it would be very helpful to have Llama 2 as a writing assistant that can generate content, suggest improvements, or check grammar and spelling. I have used llama locally for some time, but only in chatlike settings, This is a far cry from an integrated assistant I imagine is possible. I searched and couldn't find any project doing this. So, does anyone know if there is any integration of Llama 2 with a word processor? And if not, why not? Is it because of technical difficulties, or lack of demand? I would love to hear your thoughts and opinions on this topic. Thanks in advance!
2023-09-11T13:06:59
https://www.reddit.com/r/LocalLLaMA/comments/16futml/integrating_llama_2_in_word_processor/
EmbarrassedIce9048
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16futml
false
null
t3_16futml
/r/LocalLLaMA/comments/16futml/integrating_llama_2_in_word_processor/
false
false
self
5
null
OmniQuant of Falcon-180B has been released!
117
ERROR: type should be string, got " https://github.com/OpenGVLab/OmniQuant\n\n \nNews\n\n\n[2023/09] 🔥 We have expanded support for Falcon. OmniQuant efficiently compresses Falcon-180b from 335G to 65G, with minimal performance loss. Furthermore, this compression allows for Falcon-180b inference on a single A100 80GB GPU. For details, refer to [runing_falcon180b_on_single_a100_80g](https://github.com/OpenGVLab/OmniQuant/blob/main/runing_falcon180b_on_single_a100_80g.ipynb).\n\n\nhttps://i.imgur.com/11HKigM.png"
2023-09-11T12:33:36
https://www.reddit.com/r/LocalLLaMA/comments/16fu45d/omniquant_of_falcon180b_has_been_released/
ittu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16fu45d
false
null
t3_16fu45d
/r/LocalLLaMA/comments/16fu45d/omniquant_of_falcon180b_has_been_released/
false
false
self
117
{'enabled': False, 'images': [{'id': 'GzPPByNW1RDV8D8rRvkNFHFjESaZGE4vOXGk9NCS_nI', 'resolutions': [{'height': 14, 'url': 'https://external-preview.redd.it/xEw8GsZu9zkRWYNrK5ZT6WYVsxrokFic8WYY2hDTXn0.png?width=108&crop=smart&auto=webp&s=6c543155daf3777c34fdbdc5ea06812d3d6acb46', 'width': 108}, {'height': 28, 'url': 'https://external-preview.redd.it/xEw8GsZu9zkRWYNrK5ZT6WYVsxrokFic8WYY2hDTXn0.png?width=216&crop=smart&auto=webp&s=f617060b4fa0af1bf2f94ebd3f8690c68a015f71', 'width': 216}, {'height': 42, 'url': 'https://external-preview.redd.it/xEw8GsZu9zkRWYNrK5ZT6WYVsxrokFic8WYY2hDTXn0.png?width=320&crop=smart&auto=webp&s=d5528033f85531d595ee19f22681dfd68d47de6e', 'width': 320}, {'height': 85, 'url': 'https://external-preview.redd.it/xEw8GsZu9zkRWYNrK5ZT6WYVsxrokFic8WYY2hDTXn0.png?width=640&crop=smart&auto=webp&s=8b8610a329b0b9b279e8c3e7d6631521533739c5', 'width': 640}, {'height': 128, 'url': 'https://external-preview.redd.it/xEw8GsZu9zkRWYNrK5ZT6WYVsxrokFic8WYY2hDTXn0.png?width=960&crop=smart&auto=webp&s=f6bf81ed3999d6d8a86cd2f649156db732ea9b6f', 'width': 960}, {'height': 144, 'url': 'https://external-preview.redd.it/xEw8GsZu9zkRWYNrK5ZT6WYVsxrokFic8WYY2hDTXn0.png?width=1080&crop=smart&auto=webp&s=d68a62a2ce0666c20cf0f8ea3421aa0146766f34', 'width': 1080}], 'source': {'height': 289, 'url': 'https://external-preview.redd.it/xEw8GsZu9zkRWYNrK5ZT6WYVsxrokFic8WYY2hDTXn0.png?auto=webp&s=b01255093a55c6e649f7317fe659ab1d24e9e431', 'width': 2155}, 'variants': {}}]}
Using RTX-6000 to fine-tune Vicuna
2
I am trying to fine-tune vicuna 7b following [this guide](https://github.com/lm-sys/FastChat) but I get a value error stating it is only possible using A-series or V-series gpu. My question is, do I need these type of GPU to fine tune my model or is there any work-around?
2023-09-11T12:24:51
https://www.reddit.com/r/LocalLLaMA/comments/16ftxkj/using_rtx6000_to_finetune_vicuna/
insane-defaults
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16ftxkj
false
null
t3_16ftxkj
/r/LocalLLaMA/comments/16ftxkj/using_rtx6000_to_finetune_vicuna/
false
false
self
2
{'enabled': False, 'images': [{'id': '8gdDInq_TmmDruxFNNtiNzd7qvBrHHzZ_cTL2Iryz5c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dppW8DVYXMmkPpSi7A_E3WPesqHS5TOVGQAIwtkFmuU.jpg?width=108&crop=smart&auto=webp&s=fc5d9e1183c2dd198b19bb9bea9537ef0c7f0898', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dppW8DVYXMmkPpSi7A_E3WPesqHS5TOVGQAIwtkFmuU.jpg?width=216&crop=smart&auto=webp&s=6e41e48c4b1e6365acb3dbf53212b1352bb7be44', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dppW8DVYXMmkPpSi7A_E3WPesqHS5TOVGQAIwtkFmuU.jpg?width=320&crop=smart&auto=webp&s=d8ef26860475e11896b17c53190941a4ff949735', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dppW8DVYXMmkPpSi7A_E3WPesqHS5TOVGQAIwtkFmuU.jpg?width=640&crop=smart&auto=webp&s=5ee4a426477824ab8bcc97d8df554d9312cbe932', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dppW8DVYXMmkPpSi7A_E3WPesqHS5TOVGQAIwtkFmuU.jpg?width=960&crop=smart&auto=webp&s=d54fb00c816d54e0bdf92bc88116673d32a35f95', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dppW8DVYXMmkPpSi7A_E3WPesqHS5TOVGQAIwtkFmuU.jpg?width=1080&crop=smart&auto=webp&s=6f1d770aa0c42734b4ab0e953f8f80daacc56604', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dppW8DVYXMmkPpSi7A_E3WPesqHS5TOVGQAIwtkFmuU.jpg?auto=webp&s=606ff585400cb39017efe7f03b5f326465248af2', 'width': 1200}, 'variants': {}}]}
70B build at $4000
36
Can anyone recommend the hardware for running a 70B model (maybe \`garage-bAInd/Platypus2-70B-instruct\`)? Are 2x3090s my best bet? Memory (RAM) seems cheap enough so I'd think to throw 256 GB at it? Is there a point when enough is enough? Does CPU speed make a big difference or is it purely GPU bound?
2023-09-11T11:06:35
https://www.reddit.com/r/LocalLLaMA/comments/16fsg44/70b_build_at_4000/
flemhans
self.LocalLLaMA
1970-01-01T00:00:00
1
{'gid_2': 1}
16fsg44
false
null
t3_16fsg44
/r/LocalLLaMA/comments/16fsg44/70b_build_at_4000/
false
false
self
36
null
What is the best LLM for every weight?
0
What is the best LLM in the 1b variant as for 3b and 7b 13b 30b and 70b? and why not 256m and 512m 768m too.
2023-09-11T10:47:33
https://www.reddit.com/r/LocalLLaMA/comments/16fs3ip/what_is_the_best_llm_for_every_weight/
Puzzleheaded_Acadia1
self.LocalLLaMA
1970-01-01T00:00:00
1
{'gid_2': 1}
16fs3ip
false
null
t3_16fs3ip
/r/LocalLLaMA/comments/16fs3ip/what_is_the_best_llm_for_every_weight/
false
false
self
0
null
How to use models on local, w-out downloading with code?
2
Hi everyone! I've been working on Llama, and I made it work. However, this was my model: &#x200B; &#x200B; `model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf",` `device_map ='auto',` `torch_dtype = torch.float16,` `use_auth_token = True)` &#x200B; As you can see, this is not a quantized or optimized version, so since I run on "NVIDIA GeForce RTX 2080 TI", every query is taking so long (20 mins), even I already made Chroma to save embeddings. &#x200B; Therefore, I was trying to use a lighter model, specifically "TheBloke/Llama-2-7B-GGUF", by only changing the parameter above function. Then, I got this error &#x200B; `oserror: thebloke/llama-2-7b-gguf does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.` &#x200B; &#x200B; From the answer online, I thought that I should download the model files directly to my local. However, I could not find anything how to run models on directly local, without pulling them from HF with code. What should I do? &#x200B;
2023-09-11T10:10:30
https://www.reddit.com/r/LocalLLaMA/comments/16frh5n/how_to_use_models_on_local_wout_downloading_with/
JavaMaster420
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16frh5n
false
null
t3_16frh5n
/r/LocalLLaMA/comments/16frh5n/how_to_use_models_on_local_wout_downloading_with/
false
false
self
2
null
Are there any node-based GUIs for LMMs (like comfyui for sd)?
2
I'd love to semi-automate workflow(s) that involve multiple multi-turn conversations with minimal programming for brainstorming and model testing. Comfyui seems just perfect for what I want, but maybe you can do something like this in ooba or silly tavern?
2023-09-11T10:08:49
https://www.reddit.com/r/LocalLLaMA/comments/16frg4w/are_there_any_nodebased_guis_for_lmms_like/
BalorNG
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16frg4w
false
null
t3_16frg4w
/r/LocalLLaMA/comments/16frg4w/are_there_any_nodebased_guis_for_lmms_like/
false
false
self
2
null
Increase PrivateGPT response length
0
The current response often falls short of my need. Does anyone here know how can the response length be increased in PrivateGPT?
2023-09-11T10:03:11
https://www.reddit.com/r/LocalLLaMA/comments/16frclm/increase_privategpt_response_length/
mohityadavx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16frclm
false
null
t3_16frclm
/r/LocalLLaMA/comments/16frclm/increase_privategpt_response_length/
false
false
self
0
null
P40 as upgrade to slow PC?
6
With interest I've been playing around with a bit of LLM generative text and Stable Diffusion. However, I've been using an old repurposed PC and while I am able to run things, performance is as slow as one might expect with the components I have at my disposal. I am considering either replacing the PC with something built up from scratch with passable performance in mind (i.e. better than the current 0.6t/s; 20min per 512 x 512 image), or upgrading it with a GPU as this is the obvious missing component as it currently stands. In deciding between a 12GB 3060 or a 16GB 4060 TI, a P40 entered the mix. This is the least expensive option and offers the most VRAM. As this is a hobby that I'm messing around with, I'd prefer not to overextend financially until I know that I'd want to dive deeper into this world. My query relates to the effectiveness of the P40 against the other two GPUs, and if the age and low-end components of the existing PC are likely to introduce a new bottleneck despite having a P40 in the mix. Current specs: * Core i3-4130 * 16GB DDR3 1600MHz (13B q5 GGML is possible) * 128GB SATA SSD (this will be upgraded to 512GB soon in any event) * PCIE 3.0 Your advice would be most appreciated.
2023-09-11T09:59:23
https://www.reddit.com/r/LocalLLaMA/comments/16fra51/p40_as_upgrade_to_slow_pc/
OdinSA
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16fra51
false
null
t3_16fra51
/r/LocalLLaMA/comments/16fra51/p40_as_upgrade_to_slow_pc/
false
false
self
6
null
Getting crazy high loss finetuning llama2-7b on unstructured data
6
&#x200B; https://preview.redd.it/rxfl6h5zclnb1.png?width=1305&format=png&auto=webp&s=017662615e9e9e61b266b566ac752684d2550a5e https://preview.redd.it/ssszygu7dlnb1.png?width=1663&format=png&auto=webp&s=75c32774790190aaee5c9d7c464cdf0d75f5d0f4 &#x200B; Running lous-research/llama-2-7b-hf in oobabooga via Transformers in 4-bit with double quantization. Running in colab. I'm still learning fine-tuning, so I'm sure its a user error, but I'm after some pointers as to why I'm getting insanely high loss that seemingly keeps increasing when I set my rank to > 256? I've had sucess with smaller numbers than that, but I read that increasing rank greatly helps the model learn new information. The dataset is skyrim lore scraped from the wiki, loaded in as a text file. Any pointers would be amazing!
2023-09-11T09:07:03
https://www.reddit.com/r/LocalLLaMA/comments/16fqfph/getting_crazy_high_loss_finetuning_llama27b_on/
Goatman117
self.LocalLLaMA
1970-01-01T00:00:00
1
{'gid_2': 1}
16fqfph
false
null
t3_16fqfph
/r/LocalLLaMA/comments/16fqfph/getting_crazy_high_loss_finetuning_llama27b_on/
false
false
https://b.thumbs.redditm…gaQoq6xIyaaE.jpg
6
null
Optimizing 'airoboros-l2-?b-gpt4-2.0' for Limited Resources: Seeking Guidance
0
Hey everyone, I'm facing a challenging issue and could really use your help. Here's the situation: My Setup: CPU: AMD Ryzen R5 3600 RAM: 8GB (with a 30GB swap file) GPU: Nvidia RTX 3060 Ti OS: Ubuntu 22.04 (Linux Lite) Nvidia drivers: Version 470 (CUDA 11.4) The Problem: I'm working with the 'airoboros-l2-13b-gpt4-2.0' and 'airoboros-l2-7b-gpt4-m2.0' model using vLLM. I keep encountering CUDA out-of-memory errors. Recently, I ran into a mysterious "Magic no. error." What I've Tried So Far: I tweaked the 'config.json' file. Adjusted parameters like 'hidden\_size,' 'num\_hidden\_layers,' and 'num\_attention\_heads' to reduce model size. Where I Need Help: Understanding the Problem: Can someone help me break down these CUDA out-of-memory errors and the "Magic no. error"? Optimizing 'config.json': I experimented, but maybe there are better settings for my hardware. First Principles Approach: Let's start from scratch. How can we ensure the model runs efficiently on my setup? Monitoring GPU Resources: What tools or techniques can I use to keep track of GPU memory usage? Community Knowledge: Share your experiences. Let's build a collaborative space where we all learn together. If you've faced similar challenges or have experience with optimizing models for limited GPU resources, your insights would be greatly appreciated. Your assistance could not only help me but also benefit anyone working with resource-intensive models. Together, we'll conquer this challenge and make the most of our hardware. Thanks for your help in advance. I'm looking forward to our discussion!
2023-09-11T07:56:17
https://www.reddit.com/r/LocalLLaMA/comments/16fpce9/optimizing_airoborosl2bgpt420_for_limited/
ravimohankhanna7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16fpce9
false
null
t3_16fpce9
/r/LocalLLaMA/comments/16fpce9/optimizing_airoborosl2bgpt420_for_limited/
false
false
self
0
null
Quality of vicuna-13B-v1.5-16K-GGUF using LM Studio supersedes any model with text-generation-webui - Is this a good thing?
50
Dear Redditors, I have been trying a number of LLM models on my machine that are in the 13B parameter size to identify which model to use. Now I have 12GB of VRAM so I wanted to test a bunch of 30B models in a tool called LM Studio ([https://lmstudio.ai/](https://lmstudio.ai/)) which I found by looking into the descriptions of theBloke's models. Anyway, fast forward to yesterday. I tested models including: * TheBloke/WizardLM-1.0-Uncensored-Llama2-13B-GPTQ * TheBloke/WizardMath-13B-V1.0-GPTQ * TheBloke/WizardLM-13B-V1.2-GPTQ * TheBloke/OpenOrca-Platypus2-13B-GPTQ *(gptq-4bit-32g-actorder\_True)* * TheBloke/Airoboros-L2-13B-2.1-GGUF * TheBloke/mpt-30B-chat-GGML * TheBloke/vicuna-13B-v1.5-16K-GGUF *(q6\_k)* * TheBloke/vicuna-13B-v1.5-16K-GPTQ *(gptq-4bit-32g-actorder\_True)* And one of them was giving incredible results (**vicuna-13B-v1.5-16K-GGUF**)**, but only when used in LM Studio**. I tried putting it in `oobabooga/text-generation-webui` and launching via llama.cpp, but that did not work for some reason (generation speeds were like 1 word per minute, something was probably not configured well even though I had same `n_gpu` 35 with 12 threads as I was using in LM studio). I have even tried the **vicuna-13B-v1.5-16K-GPTQ** via AutoGPTQ which should theoretically give me same results as the same model of GGUF type but with even better speeds. But that was not the case unfortunately. I tried adjusting the configuration like temperature and other parameters to replicate same params by LM Studio, but I couldn't replicate the same results. It seems that LM Studio is using something different because I even tried the GGUF model with KoboldCpp and I didn't feel the results were as good, though that is not the subject because I want to use the GPTQ version in oobabooga for better speeds and mainly for the integration with Superbooga (ChromaDB). Below are some test results from both GGUF via LM Studio as well as GPTQ via Oobabooga of same model (TheBloke/vicuna-13B-v1.5-16K-GGUF, ...-GPTQ): [LM Studio with vicuna-13B-v1.5-16K-GGUF. I thought it hallucinated but then it was actually a real show.](https://preview.redd.it/77hq7gkopknb1.png?width=790&format=png&auto=webp&s=37edfe0f86a9ad541e0c1b2474d15314497eafce) Here is the config I used in LM Studio: { "name": "Config for Chat ID 1694362244592", "load_params": { "n_ctx": 16000, "n_batch": 512, "rope_freq_base": 10000, "rope_freq_scale": 0.25, "n_gpu_layers": 30, "use_mlock": true, "main_gpu": 0, "tensor_split": [ 0 ], "seed": -1, "f16_kv": true, "use_mmap": true }, "inference_params": { "n_threads": 8, "n_predict": -1, "top_k": 40, "top_p": 0.95, "temp": 0.8, "repeat_penalty": 1.1, "input_prefix": "USER:", "input_suffix": "ASSISTANT:", "antiprompt": [ "USER:" ], "pre_prompt": "A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request.", "seed": -1, "tfs_z": 1, "typical_p": 1, "repeat_last_n": 64, "frequency_penalty": 0, "presence_penalty": 0, "n_keep": 0, "logit_bias": {}, "mirostat": 0, "mirostat_tau": 5, "mirostat_eta": 0.1, "memory_f16": true, "multiline_input": false, "penalize_nl": true } } Oobabooga with GPTQ: https://preview.redd.it/ac5gqkhltknb1.png?width=1008&format=png&auto=webp&s=b17a56205f6e2f91c8c2c6768ce827ec2942f14c Here is the config I used for Oobabooga: https://preview.redd.it/b2xms0wksknb1.png?width=1257&format=png&auto=webp&s=a3fa3715a0a5ab30d5234f928f9bd3070d2899f4 https://preview.redd.it/0xkiy02nrknb1.png?width=2504&format=png&auto=webp&s=a7e175e34e801c63930981f8a3558c7337b725e5 https://preview.redd.it/ic0dekmprknb1.png?width=2518&format=png&auto=webp&s=30ce6f71d1e2934bf934c2d881f39e5733a19c75 One last thing, I thing I noticed that the perceived quality with ExLlama is way less than AutoGPTQ. **TLDR**; **Issue**: LM Studio gives much better results with **TheBloke/vicuna-13B-v1.5-16K**\-GGUF than Oobabooga does with **TheBloke/vicuna-13B-v1.5-16K**\-GPTQ. LM Studio might use some hidden parameters. **Action**: I recommend you to try to reproduce my results with same model or perhaps better bigger models. Also I wish for more assistance and support from you if you could guide me if you have been through such test results. Thank you very much for your support, everyone. I hope I was able to explain the issue, and if you have any questions, please reach out to me via comments or DM.
2023-09-11T07:14:14
https://www.reddit.com/r/LocalLLaMA/comments/16fop63/quality_of_vicuna13bv1516kgguf_using_lm_studio/
SuddenWerewolf7041
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16fop63
false
null
t3_16fop63
/r/LocalLLaMA/comments/16fop63/quality_of_vicuna13bv1516kgguf_using_lm_studio/
false
false
https://b.thumbs.redditm…zmqINAP_g7kg.jpg
50
{'enabled': False, 'images': [{'id': 'BGXfaUMntPZWzYo99FfkrbqxveeayLkICP2FRV6iEYA', 'resolutions': [{'height': 41, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=108&crop=smart&auto=webp&s=96da2c256b06310619199b215ff7567afa27ee58', 'width': 108}, {'height': 83, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=216&crop=smart&auto=webp&s=dcf240044368f708dc0d750badeebda2aa691840', 'width': 216}, {'height': 123, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=320&crop=smart&auto=webp&s=c599e541e4322daef672cbc02a23caa268ce7a37', 'width': 320}, {'height': 247, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=640&crop=smart&auto=webp&s=44654defcfaeda2a2f81d1711a6a01541805fe51', 'width': 640}, {'height': 370, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=960&crop=smart&auto=webp&s=827015bf3dd3f28e66d59efa9228d6b755907d6d', 'width': 960}, {'height': 416, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=1080&crop=smart&auto=webp&s=1382ec95526d9b1eec98203c2cb753b9c47060f2', 'width': 1080}], 'source': {'height': 440, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?auto=webp&s=e8f48766fae002673b469ec4740e89f7a0c7191f', 'width': 1140}, 'variants': {}}]}
Fine tuning Llama2 chat?
2
Anyone could guide me on how to fine tune Llama2 chat for CBT and mindfulness. Thanks xD
2023-09-11T06:40:29
https://www.reddit.com/r/LocalLLaMA/comments/16fo5qh/fine_tuning_llama2_chat/
Unalomesie
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16fo5qh
false
null
t3_16fo5qh
/r/LocalLLaMA/comments/16fo5qh/fine_tuning_llama2_chat/
false
false
self
2
null
Python library for indexing and retrieving source code files through an integrated vector database (not mine)
4
2023-09-11T05:12:41
https://github.com/definitive-io/code-indexer-loop
alphakue
github.com
1970-01-01T00:00:00
0
{}
16fmnu0
false
null
t3_16fmnu0
/r/LocalLLaMA/comments/16fmnu0/python_library_for_indexing_and_retrieving_source/
false
false
https://b.thumbs.redditm…V9siyNuaVzXg.jpg
4
{'enabled': False, 'images': [{'id': '9SwJSW7iYkxW53nhhDNn54A1g60Xem5b6HczkP4o-dA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JqQPOvx0zi_N-1_8woMwbYGGLeeyDUFUpJqUuIlSps8.jpg?width=108&crop=smart&auto=webp&s=4888f9afd315018e62406c88573f4dd75cada0d4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/JqQPOvx0zi_N-1_8woMwbYGGLeeyDUFUpJqUuIlSps8.jpg?width=216&crop=smart&auto=webp&s=c638521182d4f60da9bfa6f96694cfad71bd100f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/JqQPOvx0zi_N-1_8woMwbYGGLeeyDUFUpJqUuIlSps8.jpg?width=320&crop=smart&auto=webp&s=97985c80c855330e9b65adf1ee505b1584af965d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/JqQPOvx0zi_N-1_8woMwbYGGLeeyDUFUpJqUuIlSps8.jpg?width=640&crop=smart&auto=webp&s=cfa53cd0d8bcb5b1845c3b93902c383876be252d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/JqQPOvx0zi_N-1_8woMwbYGGLeeyDUFUpJqUuIlSps8.jpg?width=960&crop=smart&auto=webp&s=745065ff65fcb98676503ee6d8c930db7371046d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/JqQPOvx0zi_N-1_8woMwbYGGLeeyDUFUpJqUuIlSps8.jpg?width=1080&crop=smart&auto=webp&s=6f676ccc639c61406b173284480ab2908f6d89f3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/JqQPOvx0zi_N-1_8woMwbYGGLeeyDUFUpJqUuIlSps8.jpg?auto=webp&s=9f2a5f1211bfe6e29b06341704cd57e871f9bcda', 'width': 1200}, 'variants': {}}]}
Could I run any model locally with a Geforce GTX 1660 Ti 6GB
1
Basically this, is there any worthwhile model that would run locally using an NVIDIA GeForce GTX 1660 Ti with 6GB of memory?
2023-09-11T04:52:30
https://www.reddit.com/r/LocalLLaMA/comments/16fma27/could_i_run_any_model_locally_with_a_geforce_gtx/
jimmc414
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16fma27
false
null
t3_16fma27
/r/LocalLLaMA/comments/16fma27/could_i_run_any_model_locally_with_a_geforce_gtx/
false
false
self
1
null
13B model on 16GB RAM and 8GB VRAM?
10
Is that possible? I have a 3050 with 8 GB VRAM and 16 GB RAM. I've been searching everywhere trying to find a solution but I couldn't really find anything. and made some attempts myself, but I feel as if I'm doing something wrong since it took 3+ min to generate a reply and only get <1 tokens... For more info, I attempted to do it through oobabooga using llama.cpp and GGUF models. I played around with the settings and was able to load it up within some seconds, but it's just the generation part where I'm stuck. Is it supposed to take long to generate a response, or I'm just doing something wrong? Also, similar things have been happening with trying to load 7B GGUF models with llama.cpp. It'll also take a while to generate and get <1 tokens as well. I feel like I'm just doing something wrong in trying to load these models and it's starting to confuse me. I'm sorry if this is all over the place and confusing; I'm not the best at trying to explain myself.
2023-09-11T03:53:25
https://www.reddit.com/r/LocalLLaMA/comments/16fl5i0/13b_model_on_16gb_ram_and_8gb_vram/
Sensitive_Incident27
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16fl5i0
false
null
t3_16fl5i0
/r/LocalLLaMA/comments/16fl5i0/13b_model_on_16gb_ram_and_8gb_vram/
false
false
self
10
null
Hi, I would love some help regarding building QA chat with Llama
1
I'm trying to build a chat for answering with document based knowledge. The model I trying to use, used Llama2 as base, and train on top of that base. How can I use this model for QA? like, I've seen many guide using llama2 as QA chat but I have no idea how can I add this weights to it. Here's what I'm trying to use [openthaigpt](https://openthaigpt.aieat.or.th/released-models-version-less-than-1.0.0-beta-greater-than-16-08-23). Any help is appreciated!
2023-09-11T03:14:14
https://www.reddit.com/r/LocalLLaMA/comments/16fke96/hi_i_would_love_some_help_regarding_building_qa/
PuddleCuddle9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16fke96
false
null
t3_16fke96
/r/LocalLLaMA/comments/16fke96/hi_i_would_love_some_help_regarding_building_qa/
false
false
self
1
{'enabled': False, 'images': [{'id': 'OfpaNFu1iZoJvxuY32RzUhNW2udTRTIecyp5Wv1wqPU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/GPNhRlfiLJ3IVEEM7UcdJZEwoEfORFcs9zgNvV85MLA.jpg?width=108&crop=smart&auto=webp&s=16b290e08922b290c7c78b2def122d3129ded0cd', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/GPNhRlfiLJ3IVEEM7UcdJZEwoEfORFcs9zgNvV85MLA.jpg?width=216&crop=smart&auto=webp&s=f0fc17fd7c48a09879dcf81dcb1614ce2ebb5fe5', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/GPNhRlfiLJ3IVEEM7UcdJZEwoEfORFcs9zgNvV85MLA.jpg?width=320&crop=smart&auto=webp&s=343af5c7e02e87e69ab8af5872da77c159a03715', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/GPNhRlfiLJ3IVEEM7UcdJZEwoEfORFcs9zgNvV85MLA.jpg?width=640&crop=smart&auto=webp&s=dcef01c57c7907d19b9e83fdf3be826640c0589e', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/GPNhRlfiLJ3IVEEM7UcdJZEwoEfORFcs9zgNvV85MLA.jpg?width=960&crop=smart&auto=webp&s=b4e77b8d6c824620dffb4212491d4b2745948b0f', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/GPNhRlfiLJ3IVEEM7UcdJZEwoEfORFcs9zgNvV85MLA.jpg?width=1080&crop=smart&auto=webp&s=a74cbd44389f3d7bda0de6b72d1b150c89fda645', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/GPNhRlfiLJ3IVEEM7UcdJZEwoEfORFcs9zgNvV85MLA.jpg?auto=webp&s=3a324f8fe9a4a0bc284f3f14297f48aeea04a86d', 'width': 1200}, 'variants': {}}]}
Careful when training LORA using Alpaca format!
39
Llama tokenizer make different tokens if you use ### or \\n### and that makes all the difference in interference Details This is to inform anybody when you are training. I spent some time debugging this - because my training somehow sucked when I used ### Instruction ### Response, the interference was all wrong Short story: If you don't prepend \\n before ### instructions, the token will be different than if you put \\n before it as there is token 835 that is " ###" and 2277, 29937 that is ## and # https://preview.redd.it/as7r3il55jnb1.png?width=903&format=png&auto=webp&s=3b9ed1135b160baae25278b4185797362af90198 The consequences are obvious: during interference you will probably never be able to create token 835 but always 2277, 29937 So your training MAY be wrong if you trained with \[835\] ### Instructions and \[2277, 29937\] ### Response and obviously the result LORA will behave erratically. This is another reason that Alpaca format should probably NOT be used because it is prone to tokenizer bugs like this. It's a BUG in the system - just make sure you are aware of this!
2023-09-11T01:35:36
https://www.reddit.com/r/LocalLLaMA/comments/16fiabb/careful_when_training_lora_using_alpaca_format/
FPham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16fiabb
false
null
t3_16fiabb
/r/LocalLLaMA/comments/16fiabb/careful_when_training_lora_using_alpaca_format/
false
false
https://b.thumbs.redditm…NKyjYFO3ukIA.jpg
39
null
Fine tuning LLaMA 2
1
I am running an instance of 4*24 GB GPUs. A single GPU can't load the 70b model but I wanted to fine tune it to my dataset. How would I go about this?
2023-09-11T01:32:45
https://www.reddit.com/r/LocalLLaMA/comments/16fi80q/fine_tuning_llama_2/
SaatvikRamani
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16fi80q
false
null
t3_16fi80q
/r/LocalLLaMA/comments/16fi80q/fine_tuning_llama_2/
false
false
self
1
null
Code Llama Parameters
2
I have been playing with code Llama (the 7B python one). It does pretty well, but I don't understand what the parameters in the code mean and how I should modify them to work best on my hardware. I'm looking at the code in: [https://github.com/facebookresearch/codellama/blob/main/llama/generation.py](https://github.com/facebookresearch/codellama/blob/main/llama/generation.py). Unfortunately, there are no docs that come with it, so it's hard to tell. The constructor looks like: `def build(` `ckpt_dir: str,` `tokenizer_path: str,` `max_seq_len: int,` `max_batch_size: int,` `model_parallel_size: Optional[int] = None,` `)` The checkpoint and tokenizer make sense. What does max\_seq\_len, max\_batch\_size, and model\_parallel size mean? How should I set them? Then, generate looks like: `def generate(` `self,` `prompt_tokens: List[List[int]],` `max_gen_len: int,` `temperature: float = 0.6,` `top_p: float = 0.9,` `logprobs: bool = False,` `echo: bool = False,` `stop_token: Optional[int] = None,` `)` Prompt and temp make sense; what should I put at max\_gen\_len and top\_p? In particular, there is a check in the code (line 143, in the generatre function): `assert max_prompt_len <= params.max_seq_len` Why is this necessary?
2023-09-10T23:52:37
https://www.reddit.com/r/LocalLLaMA/comments/16ffxq4/code_llama_parameters/
beezlebub33
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16ffxq4
false
null
t3_16ffxq4
/r/LocalLLaMA/comments/16ffxq4/code_llama_parameters/
false
false
self
2
{'enabled': False, 'images': [{'id': 'Jan1IqgWQFD57hGifKdQzDb1QzCkX_qFPj4rhliGk7k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sShGhnZGYhVxhkALwYpQehstq-bKaQcUJJzAQLRP-aw.jpg?width=108&crop=smart&auto=webp&s=bd935217c41e6ace4d0b7e0e320dd15352085a83', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/sShGhnZGYhVxhkALwYpQehstq-bKaQcUJJzAQLRP-aw.jpg?width=216&crop=smart&auto=webp&s=b29571e338d708f4bc9867eae5f4cbf00223698a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/sShGhnZGYhVxhkALwYpQehstq-bKaQcUJJzAQLRP-aw.jpg?width=320&crop=smart&auto=webp&s=106373ce378626bbd9b83e40dcd6d63963f45de5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/sShGhnZGYhVxhkALwYpQehstq-bKaQcUJJzAQLRP-aw.jpg?width=640&crop=smart&auto=webp&s=245043fe1710b38285819901b759a4a71de1d4cc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/sShGhnZGYhVxhkALwYpQehstq-bKaQcUJJzAQLRP-aw.jpg?width=960&crop=smart&auto=webp&s=fd249ce601d7f0470c9d5a86f4888f99570e7ff2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/sShGhnZGYhVxhkALwYpQehstq-bKaQcUJJzAQLRP-aw.jpg?width=1080&crop=smart&auto=webp&s=4c7cf344242e3fdc78145a021235c61c8dfe9e59', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/sShGhnZGYhVxhkALwYpQehstq-bKaQcUJJzAQLRP-aw.jpg?auto=webp&s=8328f4eb183ba4b423467495bf5165ac662bda74', 'width': 1200}, 'variants': {}}]}
How long does it take to load model normally?
3
For me, 13B model took about 200 seconds, and 180B took more than 1 hour, is this normal? https://preview.redd.it/x2s2h5o8linb1.png?width=740&format=png&auto=webp&s=3855574859154d60f2971f36b3e0acecc7134836
2023-09-10T23:44:27
https://www.reddit.com/r/LocalLLaMA/comments/16ffqk7/how_long_does_it_take_to_load_model_normally/
Defiant_Hawk_4731
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16ffqk7
false
null
t3_16ffqk7
/r/LocalLLaMA/comments/16ffqk7/how_long_does_it_take_to_load_model_normally/
false
false
https://b.thumbs.redditm…O_4yoTLBUp5A.jpg
3
null
Is the 3060 with 12gb of ram okay for a LLM exclusive API server?
0
The goal is that I use this server on any and all applications I could need the ChatGPT API on in an effort to ditch it. I'm going to be running Llama2-13b-Chat and for a while I've been running it on CPU at about 6 t/s. My goal is to increase the speed as much as possible. Also, would it be a better idea to just setup GPTQ at this point? Or should I continue to use Llama.CPP and just offload layers to the GPU? Not sure how any of this works as I haven't even owned an NVIDIA GPU at one point. Thanks.
2023-09-10T23:39:23
https://www.reddit.com/r/LocalLLaMA/comments/16ffm6z/is_the_3060_with_12gb_of_ram_okay_for_a_llm/
-Plutonium-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16ffm6z
false
null
t3_16ffm6z
/r/LocalLLaMA/comments/16ffm6z/is_the_3060_with_12gb_of_ram_okay_for_a_llm/
false
false
self
0
null
Data Extraction using fine-tuned Llama or any other LLM?
0
Hey Reddit, I'm working on a tool to pull data from highly irregular Excel files. I've gotten reasonable results which is extremely fast with standard Python coding, but it's far from perfect due to the lack of standardized templates. Interestingly, when I tested ChatGPT-4 on a sample table, it did a decent job at data extraction. However, relying solely on GPT-4 has its downsides like token limits and slow processing speed (and data privacy issues). Plus, splitting the Excel sheet to fit within these limits results in loss of context and data. I'm considering fine-tuning a language model to post-process data that was in a Pandas DataFrame (perhaps converted to JSON). Has anyone had success with this approach or have alternative recommendations? I've tried Langchain, but it wasn't helpful. I have figured out to extract the relevant columns, but the post-processing part is where I am considering using an LLM which understands the domain and what needs to be extracted based on the examples I feed it. Looking forward to your thoughts! And would be happy to answer any additional questions.
2023-09-10T23:03:22
https://www.reddit.com/r/LocalLLaMA/comments/16feqwu/data_extraction_using_finetuned_llama_or_any/
rs35plus1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16feqwu
false
null
t3_16feqwu
/r/LocalLLaMA/comments/16feqwu/data_extraction_using_finetuned_llama_or_any/
false
false
self
0
null
Meta Is Developing a New, More Powerful AI System as Technology Race Escalates
117
2023-09-10T23:03:21
https://www.wsj.com/tech/ai/meta-is-developing-a-new-more-powerful-ai-system-as-technology-race-escalates-decf9451
hzj5790
wsj.com
1970-01-01T00:00:00
0
{}
16feqw6
false
null
t3_16feqw6
/r/LocalLLaMA/comments/16feqw6/meta_is_developing_a_new_more_powerful_ai_system/
false
false
https://a.thumbs.redditm…Al619IPnaPF8.jpg
117
{'enabled': False, 'images': [{'id': '55LVCrtKrMX3T9qfEtCyWuvCswUr0nN14F-1SUnZYxE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_OCGjFpHvh9aSJ6q8GO9mDHTOQ0Qf_5qaWb4AMcS9ss.jpg?width=108&crop=smart&auto=webp&s=e56e369ec93aca35a99de9830a8b608948668143', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_OCGjFpHvh9aSJ6q8GO9mDHTOQ0Qf_5qaWb4AMcS9ss.jpg?width=216&crop=smart&auto=webp&s=06ac40196c141266bc7c1ee4249ae9f83fbaf8d7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_OCGjFpHvh9aSJ6q8GO9mDHTOQ0Qf_5qaWb4AMcS9ss.jpg?width=320&crop=smart&auto=webp&s=b199d04c3397fde0272684dee156f865b3814170', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_OCGjFpHvh9aSJ6q8GO9mDHTOQ0Qf_5qaWb4AMcS9ss.jpg?width=640&crop=smart&auto=webp&s=5443ec481a38788cb6a79c5fd3dfcd310c58bdaa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_OCGjFpHvh9aSJ6q8GO9mDHTOQ0Qf_5qaWb4AMcS9ss.jpg?width=960&crop=smart&auto=webp&s=be476264f4fd8fa692f8b3434af8585c83abaa6b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_OCGjFpHvh9aSJ6q8GO9mDHTOQ0Qf_5qaWb4AMcS9ss.jpg?width=1080&crop=smart&auto=webp&s=c6184ddb9e985278b81b26b87bf3ae0003571ac6', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/_OCGjFpHvh9aSJ6q8GO9mDHTOQ0Qf_5qaWb4AMcS9ss.jpg?auto=webp&s=7ba094d3580f51ff65d9b304610595c57de92788', 'width': 1280}, 'variants': {}}]}
What type of model Joyland is using.
1
Joyland ai is using very interesting model, why it's interesting well it's uses few relpies I tried many models such as mythomax but those model give me story reply or whatever I want for causal chatting with a character, my question would be what model can do that?
2023-09-10T22:24:35
https://www.reddit.com/r/LocalLLaMA/comments/16fdsl4/what_type_of_model_joyland_is_using/
swwer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16fdsl4
false
null
t3_16fdsl4
/r/LocalLLaMA/comments/16fdsl4/what_type_of_model_joyland_is_using/
false
false
self
1
null
Editing specific sections of documents?
1
Are there any pipelines or perhaps a Langchain chain that would allow me to use an LLM to identify and edit specific portions/sections of a document based on a query? I understand I can have the document indexed using an abrupt character split of a set number of characters, and edit the relevant chunk and re-append to the document, however if the content that is to be edited is spread across two chunks, I would end up having to regenerate both those chunks and re-appending to the original document. However I don't know how to include the context of the previous chunk to have a smooth continuation into the 2nd chunk. Hence my question, is there any implementation for this? Or a simpler, better approach that I am missing? Any resources or help is greatly appreciated.
2023-09-10T22:23:03
https://www.reddit.com/r/LocalLLaMA/comments/16fdr8r/editing_specific_sections_of_documents/
ShaneMathy911
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16fdr8r
false
null
t3_16fdr8r
/r/LocalLLaMA/comments/16fdr8r/editing_specific_sections_of_documents/
false
false
self
1
null
Dynalang code released
25
Disclaimer: I'm not responsible for the code or paper. [Dynalang leverages diverse types of language to solve tasks by using language to predict the future via a multimodal world model.](https://reddit.com/link/16fd097/video/cwn9ovsq1inb1/player) Code: [https://github.com/jlin816/dynalang](https://github.com/jlin816/dynalang) Project: [https://dynalang.github.io/](https://dynalang.github.io/) Paper: [https://arxiv.org/abs/2308.01399](https://arxiv.org/abs/2308.01399)
2023-09-10T21:54:14
https://www.reddit.com/r/LocalLLaMA/comments/16fd097/dynalang_code_released/
ninjasaid13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16fd097
false
null
t3_16fd097
/r/LocalLLaMA/comments/16fd097/dynalang_code_released/
false
false
https://b.thumbs.redditm…JUc7VU78aN_Q.jpg
25
{'enabled': False, 'images': [{'id': 'aNwPHA6U-XUNV4B2lVhRynEz0EOkPRSyPpKMyjvYkgY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KJbWui38p5IAiLEyD2bzkRGZ3N94KWEiMSRzGSuYH8w.jpg?width=108&crop=smart&auto=webp&s=6b43263969f6d66371d6aa191d926e3d111af291', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KJbWui38p5IAiLEyD2bzkRGZ3N94KWEiMSRzGSuYH8w.jpg?width=216&crop=smart&auto=webp&s=889be809d4f3a267d6ed7ea29d61b2e7f62d9e2c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KJbWui38p5IAiLEyD2bzkRGZ3N94KWEiMSRzGSuYH8w.jpg?width=320&crop=smart&auto=webp&s=cbaa30fd9d711fbf392e5f6d12780b9ee5c1e261', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KJbWui38p5IAiLEyD2bzkRGZ3N94KWEiMSRzGSuYH8w.jpg?width=640&crop=smart&auto=webp&s=8bbc34871f8f3a16657802d034c4a68ea9bb3da1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KJbWui38p5IAiLEyD2bzkRGZ3N94KWEiMSRzGSuYH8w.jpg?width=960&crop=smart&auto=webp&s=365cd2032766cc7cab76f5e153fc5db73bac21ac', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KJbWui38p5IAiLEyD2bzkRGZ3N94KWEiMSRzGSuYH8w.jpg?width=1080&crop=smart&auto=webp&s=8bb63d01facb849c4e396fb83f640f10423aec75', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KJbWui38p5IAiLEyD2bzkRGZ3N94KWEiMSRzGSuYH8w.jpg?auto=webp&s=1de1b8be5722b689b5ce7b49a6d7069d06e1f97c', 'width': 1200}, 'variants': {}}]}
Are there any graphics cards priced ≤ 300€ that offer good performance for Transformers LLM training and inference?
9
Are there any graphics cards priced ≤ 300€ that offer good performance for Transformers LLM training and inference? (Used would be totally ok too) I like to train small LLMs (3B, 7B, 13B). Already trained a few. But I want to get things running locally on my own GPU, so I decided to buy a GPU. Now I am looking around a bit. And I think the RTX 3060 12 GB is a great card for my budget. The prices for that card start from 278€ What do you think, is there a better option for my budget? How far is the ROCm technique? Would AMD cards also be an option? I use Arch Linux. Thank you.
2023-09-10T21:25:58
https://www.reddit.com/r/LocalLLaMA/comments/16fc9i6/are_there_any_graphics_cards_priced_300_that/
InternationalTeam921
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16fc9i6
false
null
t3_16fc9i6
/r/LocalLLaMA/comments/16fc9i6/are_there_any_graphics_cards_priced_300_that/
false
false
self
9
null
KoboldCPP / LlamaCPP Frankensteined - Some Blast Batch Size tests
12
Some results with llamacpp-frankensteined\_b1204e\_b1209\_Cublas\_12.1\_bin (link at the bottom of the page), testing various values for **Blast Batch Size (BBS) :** --blasbatchsize x **Tests of VRAM occupation made on a RTX 3090 with full layers offload :** **For all :** llm\_load\_tensors: VRAM used: 16958 MB \+ **llama\_new\_context\_with\_model: kv self size = 1536.00 MB** (total : 18484, including the approx 480MB additional buffer populated when full context is reached, that's our baseline for all BBS values) \+ fixed blast buffer accordingly to BBS size. **BBS 2048** perplexity -m airoboros-c34b-2.1.Q3\_K\_M.gguf -f wiki.test.raw -ngl 100 -b 2048 -c 8192 --rope-freq-scale 1 --rope-freq-base 100000 --chunks 1 -> **22833 MB VRAM max -> ratio 100%** approx : 18010MB + 4288MB (fixed blast buffer) + 480MB (additional buffer created with 8192 processed tokens) llama\_print\_timings: prompt eval time = 31237.98 ms / 8192 tokens ( 3.81 ms per token, **262.24 tokens per second) -> ratio 100%** **BBS 1024** perplexity -m airoboros-c34b-2.1.Q3\_K\_M.gguf -f wiki.test.raw -ngl 100 -b 1024 -c 8192 --rope-freq-scale 1 --rope-freq-base 100000 --chunks 1 -> **20627 MB VRAM max -> ratio 90.3%** approx : 18010MB + 2144MB (fixed blast buffer) + 480MB (additional buffer created with 8192 processed tokens) llama\_print\_timings: prompt eval time = 31683.44 ms / 8192 tokens ( 3.87 ms per token, **258.56 tokens per second) -> ratio 98.6%** **BBS 512** perplexity -m airoboros-c34b-2.1.Q3\_K\_M.gguf -f wiki.test.raw -ngl 100 -b 512 -c 8192 --rope-freq-scale 1 --rope-freq-base 100000 --chunks 1 -> **19573 MB VRAM max -> ratio 85.7%** approx : 18010MB + 1072MB (fixed blast buffer) + 480MB (additional buffer created with 8192 processed tokens) llama\_print\_timings: prompt eval time = 34139.16 ms / 8192 tokens ( 4.17 ms per token, **239.96 tokens per second) -> ratio 91.5%** **BBS 256** perplexity -m airoboros-c34b-2.1.Q3\_K\_M.gguf -f wiki.test.raw -ngl 100 -b 256 -c 8192 --rope-freq-scale 1 --rope-freq-base 100000 --chunks 1 -> **19029 MB VRAM max -> ratio 83.3%** approx : 18010MB + 536MB (fixed blast buffer) + 480MB (additional buffer created with 8192 processed tokens) llama\_print\_timings: prompt eval time = 38949.93 ms / 8192 tokens ( 4.75 ms per token, **210.32 tokens per second) -> ratio 80.2%** **BBS 128** perplexity -m airoboros-c34b-2.1.Q3\_K\_M.gguf -f wiki.test.raw -ngl 100 -b 128 -c 8192 --rope-freq-scale 1 --rope-freq-base 100000 --chunks 1 -> **18743 MB VRAM max -> ratio 82.1%** approx : 18010MB + 268MB (fixed blast buffer) + 480MB (additional buffer created with 8192 processed tokens) llama\_print\_timings: prompt eval time = 45749.06 ms / 8192 tokens ( 5.58 ms per token, **179.06 tokens per second) -> ratio 68.3%** **BBS 64** perplexity -m airoboros-c34b-2.1.Q3\_K\_M.gguf -f wiki.test.raw -ngl 100 -b 64 -c 8192 --rope-freq-scale 1 --rope-freq-base 100000 --chunks 1 -> **18629 MB VRAM max -> ratio 81.6%** approx : 18010MB + 134MB (fixed blast buffer) + 480MB (additional buffer created with 8192 processed tokens) llama\_print\_timings: prompt eval time = 74474.66 ms / 8192 tokens ( 9.09 ms per token, **110.00 tokens per second) -> ratio 41.9%** **BBS 32** perplexity -m airoboros-c34b-2.1.Q3\_K\_M.gguf -f wiki.test.raw -ngl 100 -b 32 -c 8192 --rope-freq-scale 1 --rope-freq-base 100000 --chunks 1 -> **18555 MB VRAM max -> ratio 81.3%** approx : 18010MB + 67MB (fixed blast buffer) + 480MB (additional buffer created with 8192 processed tokens) llama\_print\_timings: prompt eval time = 137798.75 ms / 8192 tokens ( 16.82 ms per token, **59.45 tokens per second) -> ratio 22.7%** &#x200B; GPU-Z screenshot with the Frankenstein Llama version b1209 : https://preview.redd.it/0corjt147hnb1.png?width=1175&format=png&auto=webp&s=bdf859f745ea08857ec32d78ba44881848e74fc1 For info, screenshot with the official Llama version b1209 : https://preview.redd.it/t28vufbaehnb1.png?width=1215&format=png&auto=webp&s=355d94ac93806c6c438086615f5484cc3ab07b15 Observe the used memory growth curves along the processing of the context. The smaller the batch, the higher it goes. &#x200B; **Conclusion :** \- Works as intended, unlike the official releases since early august 2023 in which each batch weights more in VRAM than the previous one as they are processed (thus, OOM will happen faster on a lot of small batches than on a smaller number of bigger ones despite a smaller initial buffer for the smaller ones) \- The sweet BBS spots for most people are 512 or even 256, to limit the VRAM occupation without sacrificing much performances. 128 is still usable, and allows the most optimal context size. The performance collapse comes at 64 and of course 32 is worst, while 2048 is almost useless compared to 1024, the sweet spot for the speed freaks. \- Note : Q3\_K\_M is itself the best k\_quant for a low size allowing high context (once something goes out of context, it doesn't exist anymore and this perplexity is hence infinite) with an acceptable perplexity loss compared to f16. Resources : [https://www.reddit.com/r/LocalLLaMA/comments/16elfqa/for\_cuda\_mmq\_users\_on\_koboldcpp\_heres\_a\_fix\_to/](https://www.reddit.com/r/LocalLLaMA/comments/16elfqa/for_cuda_mmq_users_on_koboldcpp_heres_a_fix_to/) [https://github.com/Nexesenex/kobold.cpp/releases/tag/1.43.b1216](https://github.com/Nexesenex/kobold.cpp/releases/tag/1.43.b1216) &#x200B; Edit : The issue has been fixed in Llama CPP. As well as in KoboldCPP experimental. This release is now obsolete and offline.
2023-09-10T20:24:05
https://www.reddit.com/r/LocalLLaMA/comments/16famrm/koboldcpp_llamacpp_frankensteined_some_blast/
Nexesenex
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16famrm
false
null
t3_16famrm
/r/LocalLLaMA/comments/16famrm/koboldcpp_llamacpp_frankensteined_some_blast/
false
false
https://a.thumbs.redditm…HuGKGLKK31g4.jpg
12
{'enabled': False, 'images': [{'id': 'sK7Y9HZpllGqqgn2-fKa0H1PoN54ZTwGMyuutmT7JtE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uRLtDGK633cK_iqcxs3EcyKCS7XXKQ_9Ox_5EQ9FnSI.jpg?width=108&crop=smart&auto=webp&s=5144621337daa51eb6689587ec5204b6cebe4017', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uRLtDGK633cK_iqcxs3EcyKCS7XXKQ_9Ox_5EQ9FnSI.jpg?width=216&crop=smart&auto=webp&s=d1bd5e252cd91bb832c7b62a11bc1e09cd7e340b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uRLtDGK633cK_iqcxs3EcyKCS7XXKQ_9Ox_5EQ9FnSI.jpg?width=320&crop=smart&auto=webp&s=d4df4266d22ab1aaedb5ad297e97c9f79034ed3b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uRLtDGK633cK_iqcxs3EcyKCS7XXKQ_9Ox_5EQ9FnSI.jpg?width=640&crop=smart&auto=webp&s=28e5f7a71542eeb7fa5c5b7a237bb667c9c1c394', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uRLtDGK633cK_iqcxs3EcyKCS7XXKQ_9Ox_5EQ9FnSI.jpg?width=960&crop=smart&auto=webp&s=33243b2ae0a783a6eacef9514d66cd95e0ee60ce', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uRLtDGK633cK_iqcxs3EcyKCS7XXKQ_9Ox_5EQ9FnSI.jpg?width=1080&crop=smart&auto=webp&s=ffeb80411333ec81eaef877bce21b27620de2ba1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uRLtDGK633cK_iqcxs3EcyKCS7XXKQ_9Ox_5EQ9FnSI.jpg?auto=webp&s=7d6b03e85f3b0b77837eb3dd25fe41c143971962', 'width': 1200}, 'variants': {}}]}
is this build good for ai?
2
Processor: Ryzen 9 7900X. Graphics card: NVIDIA founders RTX 3090. CPU cooler: Cooler Master Hyper 212 Black Edition. Motherboard: Gigabyte B650 Aorus Elite AX ATX AM5. SSD: Samsung 970 EVO Plus 2 TB. Case: Corsair D4000 airflow ATX mid Tower case. Power Supply: Corsair RM750e 750W.
2023-09-10T20:14:45
https://www.reddit.com/r/LocalLLaMA/comments/16fadzb/is_this_build_good_for_ai/
Many-Corner-6700
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16fadzb
false
null
t3_16fadzb
/r/LocalLLaMA/comments/16fadzb/is_this_build_good_for_ai/
false
false
self
2
null
What settings to load falcon-180b-chat.Q3_K_L.gguf in textgen webui with 2x 4090s and 64gb of ram?
2
I've only used gptq models and I can't get this working. Through playing around randomly I managed to load it with: n-gpu-layers 42 n\_ctx 2048 threads 16 (using 5950x) n\_batch 511 (I have no idea what this means) tensor split 23, 24 But when I try inferrence I get: `CUDA error 2 at D:\a\llama-cpp-python-cuBLAS-wheels\llama-cpp-python-cuBLAS-wheels\vendor\llama.cpp\`[`ggml-cuda.cu:5031`](https://ggml-cuda.cu:5031)`: out of memory` `C:\arrow\cpp\src\arrow\filesystem\`[`s3fs.cc:2829`](https://s3fs.cc:2829)`: arrow::fs::FinalizeS3 was not called even though S3 was initialized. This could lead to a segmentation fault at exit` Now I realize this means out of memory, though I'm not sure if it's talking about the ram or vram. Cuda sounds like vram, though I get the same error with less layers offloaded so I'm not sure. Do I just need to try a smaller quant? Or is there something in the textgen webui settings that would fix this? Like maybe the n\_batch? Or some other setting(s) I haven't even touched? I would like to at least see falcon 180B's quality before getting another 64gb of ram. Thanks to anyone who can help
2023-09-10T20:13:07
https://www.reddit.com/r/LocalLLaMA/comments/16facgu/what_settings_to_load_falcon180bchatq3_k_lgguf_in/
UnarmedPug
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16facgu
false
null
t3_16facgu
/r/LocalLLaMA/comments/16facgu/what_settings_to_load_falcon180bchatq3_k_lgguf_in/
false
false
self
2
null
Any problems with mixing GPUs? 4090+3090
1
I have a 4090 and want to expand to get 48GB of VRAM to run larger models. Is it a bad idea to pair it with a 3090? Will this give me trouble if I'm trying to do anything beyond inference like training? 3090s are cheaper and I saw on this forum someone mentioned that it works just fine. I also saw that pairing GPUs of different generations can be problematic for deep learning so I'm wondering if anyone has experience in this area
2023-09-10T19:23:32
https://www.reddit.com/r/LocalLLaMA/comments/16f91ql/any_problems_with_mixing_gpus_40903090/
yellowcustard77
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16f91ql
false
null
t3_16f91ql
/r/LocalLLaMA/comments/16f91ql/any_problems_with_mixing_gpus_40903090/
false
false
self
1
null
Why are there no consumer grade ML GPU cards in between memory starved Gamer GPUs and superexpensive Enterprise GPUs?
142
Okay, unless I'm missing something you either get a gaming gpu thats limited to 24gb which is just fine for gamers but is a pathetic amount of memory to even do inference with all but the smallest models let alone train and has very limited to nonexistent capability to link together. Or you put on a fedora and trenchcoat and go ring up your black market contacts for the privilege of buying enterprise level gpu workstations for 10s of thousands which is so much of boon they don't even offer it to mere mortal consumers. Or to dig through the reseller market for a cast off slightly less expensive enterprise gpu someone is willing to part with. Would it be that difficult to introduce a midlevel 'professional' gpu where they just take a gamer gpu and stick some more memory on it and maybe also reintroduce a slightly pumped up nvlink which can handle more than 2 cards? Is anything like that in the pipes?
2023-09-10T18:43:52
https://www.reddit.com/r/LocalLLaMA/comments/16f7zb4/why_are_there_no_consumer_grade_ml_gpu_cards_in/
cyborgsnowflake
self.LocalLLaMA
1970-01-01T00:00:00
1
{'gid_2': 1}
16f7zb4
false
null
t3_16f7zb4
/r/LocalLLaMA/comments/16f7zb4/why_are_there_no_consumer_grade_ml_gpu_cards_in/
false
false
self
142
null
Is era of training models from scratch over
28
With era of all open source models and open source optimisation tools. Is era of training models over for mid level companies. Training huge models is now also becoming more and more expensive. ML is more about how to speed up and optimistize ml models for use cases. What do you guys think
2023-09-10T18:36:23
https://www.reddit.com/r/LocalLLaMA/comments/16f7tvu/is_era_of_training_models_from_scratch_over/
Spiritual-Rub925
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16f7tvu
false
null
t3_16f7tvu
/r/LocalLLaMA/comments/16f7tvu/is_era_of_training_models_from_scratch_over/
false
false
self
28
null
Local AI PC build advice
1
[removed]
2023-09-10T18:27:52
https://www.reddit.com/r/LocalLLaMA/comments/16f7lx3/local_ai_pc_build_advice/
Ewypig
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16f7lx3
false
null
t3_16f7lx3
/r/LocalLLaMA/comments/16f7lx3/local_ai_pc_build_advice/
false
false
self
1
null
Model/LoRA with up to date LLM knowledge
2
I have what I would consider basic questions about transformer models, and how various technologies interface with each other. I don't think these kinds of questions are valuable enough ask here, but I'm still curious. Has anyone trained/fine-tuned something that can answer these questions for me? I realize that state of the art and up to date are always going to be months behind, but from what I can tell, Llama based models are at least a year behind, and gpt4 is 2 years behind. If the model also had knowledge of the inner workings of stable diffusion, that would be a plus. I can run up to 70b models.
2023-09-10T18:06:23
https://www.reddit.com/r/LocalLLaMA/comments/16f71x9/modellora_with_up_to_date_llm_knowledge/
clyspe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
16f71x9
false
null
t3_16f71x9
/r/LocalLLaMA/comments/16f71x9/modellora_with_up_to_date_llm_knowledge/
false
false
self
2
null