name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_jfl4ana
Thanks for your answer. I have another question: how do I convert the web UI format to ggml?
1
0
2023-04-09T16:25:17
PeerXu
false
null
0
jfl4ana
false
/r/LocalLLaMA/comments/12alri3/difference_in_model_formats_how_to_tell_which/jfl4ana/
false
1
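The conversion question above is usually handled by the convert scripts that ship with llama.cpp. A minimal sketch follows; the script name, arguments, and paths are assumptions that vary between llama.cpp versions, so check your own checkout before relying on them.

```python
# Illustrative sketch only: script names and flags differ between llama.cpp
# versions, so treat all of these as assumptions.
import subprocess

subprocess.run(
    [
        "python",
        "llama.cpp/convert-pth-to-ggml.py",  # name may differ in your checkout
        "models/7B/",                        # directory holding the checkpoint
        "1",                                 # ftype: 1 = f16 in older scripts
    ],
    check=True,
)
```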
t1_jfl05yk
Someone is hosting Chavinlo/gpt4-x-alpaca there which is 8bit, at least to my knowledge. And the result is amazing. But I seriously doubt precision is the reason. It might contribute to the difference a tad bit, but that shouldn't be the main cause of such a dramatic quality gap between that model and my 4bit local inf...
1
0
2023-04-09T15:57:08
_Erilaz
false
2023-04-09T16:05:24
0
jfl05yk
false
/r/LocalLLaMA/comments/129y09a/gpt4xalpaca_writes_an_ad_for_raid_shadow_legends/jfl05yk/
false
1
t1_jfkypd3
Generally, yes, I think so. The only time more is not better is when it's hallucinating a bunch of random stuff, but at that point you can usually abort it, assuming you're not using a platform that hides the end result until the entire thing is typed out, like some chat simulators.
1
0
2023-04-09T15:47:03
ThePseudoMcCoy
false
null
0
jfkypd3
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jfkypd3/
false
1
t1_jfkwbwb
What do you mean by lower precision? Are you using the 8/16-bit version in Kobold? And yeah, I don't use the chat mode at all (I only set up Ooba), because it just caused a lot of problems. But I didn't really invest much time in it, partly because the chat history takes a lot of VRAM and I don't want to offload so much...
1
0
2023-04-09T15:30:14
the_real_NordVPN
false
null
0
jfkwbwb
false
/r/LocalLLaMA/comments/129y09a/gpt4xalpaca_writes_an_ad_for_raid_shadow_legends/jfkwbwb/
false
1
t1_jfkw5e8
Thanks, I presume more is better, but at the expense of speed?
1
0
2023-04-09T15:28:57
Useful-Command-8793
false
null
0
jfkw5e8
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jfkw5e8/
false
1
t1_jfkvlec
That's such a cool test bed! I've got it grabbing the data right now to replicate and play around with it. Thank you so much for documenting the whole process too. The data collecting and formatting was the first thing that caught my eye too. It's so fascinating that it's working fine with it!
5
0
2023-04-09T15:25:02
toothpastespiders
false
null
0
jfkvlec
false
/r/LocalLLaMA/comments/12gj0l0/i_trained_llama7b_on_unreal_engine_5s/jfkvlec/
false
5
t1_jfkuhh0
Number of tokens to predict. I've played with different values; I forget what I use now.
1
0
2023-04-09T15:17:12
ThePseudoMcCoy
false
2023-04-09T15:22:46
0
jfkuhh0
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jfkuhh0/
false
1
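To make that flag concrete: in llama.cpp's main binary, -n caps how many new tokens are generated before the run stops. A sketch of an invocation follows; the binary location, model path, and prompt are placeholders.

```python
# Sketch: invoking llama.cpp's main binary with -n (tokens to predict).
# The paths and model filename below are placeholders.
import subprocess

subprocess.run(
    [
        "./main",
        "-m", "models/7B/ggml-model-q4_0.bin",  # placeholder model path
        "-p", "Write a haiku about llamas.",
        "-n", "128",  # stop after predicting at most 128 new tokens
    ],
    check=True,
)
```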
t1_jfktxa6
[https://github.com/kbressem/medAlpaca](https://github.com/kbressem/medAlpaca)
9
0
2023-04-09T15:13:08
3deal
false
null
0
jfktxa6
false
/r/LocalLLaMA/comments/12gj0l0/i_trained_llama7b_on_unreal_engine_5s/jfktxa6/
false
9
t1_jfktj2n
I use a similar setup, but faced an issue: whenever I try to create a character, the model takes the context very randomly and starts to hallucinate a lot. I am sure I am missing something, I just can't understand what exactly, because the Horde edition of GPT4-X-Alpaca works wonders with the same parameters in Kobold...
1
0
2023-04-09T15:10:16
_Erilaz
false
null
0
jfktj2n
false
/r/LocalLLaMA/comments/129y09a/gpt4xalpaca_writes_an_ad_for_raid_shadow_legends/jfktj2n/
false
1
t1_jfktgn7
Do you have a link for the 8bit one please?
2
0
2023-04-09T15:09:47
Useful-Command-8793
false
null
0
jfktgn7
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jfktgn7/
false
2
t1_jfktaes
Me too, glad you had that as well. I wasn't sure if it was the model or ooba
1
0
2023-04-09T15:08:33
Useful-Command-8793
false
null
0
jfktaes
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jfktaes/
false
1
t1_jfkt3d9
What does the -n option do? I see many variations of this value.
1
0
2023-04-09T15:07:08
Useful-Command-8793
false
null
0
jfkt3d9
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jfkt3d9/
false
1
t1_jfksvop
Can I ask what the difference between 4 and 8 bit is, please? Also, what is the "native" bit width of the original model, if you know? Really struggling to digest all this. Apologies if that's a basic question.
2
0
2023-04-09T15:05:37
Useful-Command-8793
false
null
0
jfksvop
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jfksvop/
false
2
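To make the 4-bit vs 8-bit question concrete, here is back-of-the-envelope weight-memory math for a 13B model. "Native" here means the fp16 the original weights ship in; real quantized files add some scale/group metadata on top, so these are lower-bound estimates.

```python
# Rough weight-memory math for a 13B-parameter model at different precisions.
params = 13_000_000_000

for name, bits in [("native fp16", 16), ("8-bit", 8), ("4-bit", 4)]:
    gib = params * bits / 8 / 1024**3  # bits -> bytes -> GiB
    print(f"{name:>11}: ~{gib:.1f} GiB just for the weights")
# native fp16: ~24.2 GiB, 8-bit: ~12.1 GiB, 4-bit: ~6.1 GiB
```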
t1_jfks61o
I'm also trying to get results with fine tuning right now. You can use this script to bring the text into a better form: [https://github.com/dynamiccreator/lora_scripts/blob/main/create-data-set-txt2txt.py](https://github.com/dynamiccreator/lora_scripts/blob/main/create-data-set-txt2txt.py) Should reduce your trainin...
20
0
2023-04-09T15:00:26
Ok-Scarcity-7875
false
2023-04-09T18:01:11
0
jfks61o
false
/r/LocalLLaMA/comments/12gj0l0/i_trained_llama7b_on_unreal_engine_5s/jfks61o/
false
20
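The linked script isn't reproduced here; the sketch below only illustrates the general idea of such txt2txt preprocessing: chop raw text into fixed-size chunks and pair each chunk with the one that follows it as input/output examples. All names and the chunk size are placeholders of mine, not the linked code.

```python
# Illustrative sketch: turn a raw .txt file into input/output training pairs
# by pairing each chunk of text with the chunk that follows it.
import json

def txt_to_pairs(path: str, chunk_words: int = 128) -> list[dict]:
    words = open(path, encoding="utf-8").read().split()
    chunks = [
        " ".join(words[i : i + chunk_words])
        for i in range(0, len(words), chunk_words)
    ]
    # Each chunk becomes the "input"; the next chunk is the "output".
    return [{"input": a, "output": b} for a, b in zip(chunks, chunks[1:])]

with open("dataset.json", "w", encoding="utf-8") as f:
    json.dump(txt_to_pairs("docs.txt"), f, indent=2)  # "docs.txt" is assumed
```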
t1_jfknpvr
That's 10/10. Let's do this with medical documentation to create a WATSON for everyone aka The Doctor from STV.
12
0
2023-04-09T14:27:12
Mysterious_Ayytee
false
null
0
jfknpvr
false
/r/LocalLLaMA/comments/12gj0l0/i_trained_llama7b_on_unreal_engine_5s/jfknpvr/
false
12
t1_jfkjza4
Have you left --ignore-eos in by mistake?
3
0
2023-04-09T13:57:56
ambient_temp_xeno
false
null
0
jfkjza4
false
/r/LocalLLaMA/comments/12ezcly/comparing_models_gpt4xalpaca_vicuna_and_oasst/jfkjza4/
false
3
t1_jfkizqo
I'd like 33b GPT4xAlpaca [edit: looks like that's never going to happen now]. Interestingly, on llama.cpp GPT4xAlpaca 13 q4_1 128g seems to run about the same speed for me as 33b alpaca lora merged, for whatever that's worth. Flowery, poetic prose has its place but overusing it might make it a bit empty and meaningles...
2
0
2023-04-09T13:50:02
ambient_temp_xeno
false
2023-04-09T15:54:25
0
jfkizqo
false
/r/LocalLLaMA/comments/12ezcly/comparing_models_gpt4xalpaca_vicuna_and_oasst/jfkizqo/
false
2
t1_jfkgz0i
That’s correct! I was taken off guard when I saw that it was working reasonably well, the text file formatting is messy at best.
16
0
2023-04-09T13:33:11
Bublint
false
null
0
jfkgz0i
false
/r/LocalLLaMA/comments/12gj0l0/i_trained_llama7b_on_unreal_engine_5s/jfkgz0i/
false
16
t1_jfkgwon
So I've been setting this up and tbh the cost of renting a dedicated server for inference, even assuming you maxed it out, is only about 50% cheaper than GPT3.5-Turbo. And that's assuming the server is 100% maxed out for the entire month. And that's with the most cost efficient Hetzner server which is the 80-core ARM ...
1
0
2023-04-09T13:32:36
Pan000
false
null
0
jfkgwon
false
/r/LocalLLaMA/comments/12fvq3t/does_anyone_want_to_split_a_dedicated_server_for/jfkgwon/
false
1
t1_jfkgmry
[deleted]
21
0
2023-04-09T13:30:11
[deleted]
true
null
0
jfkgmry
false
/r/LocalLLaMA/comments/12gj0l0/i_trained_llama7b_on_unreal_engine_5s/jfkgmry/
false
21
t1_jfkfh0b
[removed]
1
0
2023-04-09T13:20:00
[deleted]
true
null
0
jfkfh0b
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jfkfh0b/
false
1
t1_jfkefrs
I'm curious if I'm doing something wrong. With a 7b model at 4bits on a 2080Ti, 11Gb, I get between 4 and 8 tokens per second. I'm using oobabooga's text-generation-webui in streaming mode. Is there a better alternative with faster speeds? I read in the code that the streaming works by generating 8 tokens at a time...
1
0
2023-04-09T13:10:42
NekoSmoothii
false
null
0
jfkefrs
false
/r/LocalLLaMA/comments/12fvq3t/does_anyone_want_to_split_a_dedicated_server_for/jfkefrs/
false
1
t1_jfkcz7b
I don't know if there is, sorry.
1
0
2023-04-09T12:57:05
Art10001
false
null
0
jfkcz7b
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jfkcz7b/
false
1
t1_jfk78aw
You definitely need to try the 4-bit quantized models; I can run the 13B locally using an RTX 3060.
1
0
2023-04-09T11:57:30
Feeling-Currency-360
false
null
0
jfk78aw
false
/r/LocalLLaMA/comments/12fvq3t/does_anyone_want_to_split_a_dedicated_server_for/jfk78aw/
false
1
t1_jfjz940
Another perspective is to consider that human creativity is not that diverse and rich.
1
0
2023-04-09T10:14:18
aleph02
false
null
0
jfjz940
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfjz940/
false
1
t1_jfjyflw
Maybe something is wrong with the context settings? I had similar problems with the alpaca chat last month.
1
0
2023-04-09T10:02:32
SeymourBits
false
null
0
jfjyflw
false
/r/LocalLLaMA/comments/12g1kb2/alpaca_30b_i_discussing_its_situation_experiences/jfjyflw/
false
1
t1_jfjxub4
That is so weird. Somehow it is not detecting the dll file. Can you verify if the path listed in the error contains the dll with the correct filename? Could it be blocked by some other program on your PC?
1
0
2023-04-09T09:53:41
HadesThrowaway
false
null
0
jfjxub4
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jfjxub4/
false
1
t1_jfjuv09
[deleted]
1
0
2023-04-09T09:09:03
[deleted]
true
2023-04-09T09:14:19
0
jfjuv09
false
/r/LocalLLaMA/comments/12g5lqu/are_there_any_goto_google_colabs_for_finetuning/jfjuv09/
false
1
t1_jfjt0jz
Changing the format to or from the Stanford one is child's play, and editing the code for custom data doesn't take long. That's how I've been doing it so far with the 7b colab, but I just don't have a colab that allows for training the llama models larger than 7b. I'm fine with one that's just set up for the alpaca format ag...
2
0
2023-04-09T08:41:57
Sixhaunt
false
null
0
jfjt0jz
false
/r/LocalLLaMA/comments/12g5lqu/are_there_any_goto_google_colabs_for_finetuning/jfjt0jz/
false
2
t1_jfjrs4y
[deleted]
1
0
2023-04-09T08:24:16
[deleted]
true
null
0
jfjrs4y
false
/r/LocalLLaMA/comments/12ezcly/comparing_models_gpt4xalpaca_vicuna_and_oasst/jfjrs4y/
false
1
t1_jfjrg0w
Please, please write down your experiences if you find a decent one. Within a month or so I will want to fine-tune a 13B model, and right now what I'm seeing is a lot of tutorials - but almost all of them assume people just use local programs and the Stanford dataset format.
5
0
2023-04-09T08:19:35
szopen76
false
null
0
jfjrg0w
false
/r/LocalLLaMA/comments/12g5lqu/are_there_any_goto_google_colabs_for_finetuning/jfjrg0w/
false
5
t1_jfjo5yb
[A link for y'all](https://github.com/THUDM/ChatGLM-6B/blob/main/README_en.md). Definitely gonna try to mess around with this!
4
0
2023-04-09T07:33:59
Captain_Pumpkinhead
false
null
0
jfjo5yb
false
/r/LocalLLaMA/comments/12fwygw/coding_llama_modell/jfjo5yb/
false
4
t1_jfjj9qu
Yup, GPT4 x Alpaca is filterless. Is there a GPT4 x Alpaca 7B model? It is hard to run a 13B model on my PC; it is so slow.
3
0
2023-04-09T06:29:33
tamal4444
false
null
0
jfjj9qu
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jfjj9qu/
false
3
t1_jfjj8oz
Check out the hourly rental prices at vast.ai and runpod.ai if you need a bigger setup for one-off tasks like finetuning.
3
0
2023-04-09T06:29:11
Zyj
false
null
0
jfjj8oz
false
/r/LocalLLaMA/comments/12fvq3t/does_anyone_want_to_split_a_dedicated_server_for/jfjj8oz/
false
3
t1_jfji8jx
There are. However, they may be different. I heard GPT4 x Alpaca is filterless. Most of the filtered models can be jailbroken.
2
0
2023-04-09T06:16:20
Art10001
false
null
0
jfji8jx
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jfji8jx/
false
2
t1_jfjhmwq
Nvidia has announced some insane stuff coming up for their AI hardware too
2
0
2023-04-09T06:08:56
Sixhaunt
false
null
0
jfjhmwq
false
/r/LocalLLaMA/comments/12fvq3t/does_anyone_want_to_split_a_dedicated_server_for/jfjhmwq/
false
2
t1_jfjgcd0
> Many people think of statistics as simply a way to describe random events or data. Because they're taught that. In 7th/8th grade or so I was told "Statistics is the discipline of measuring data, randomness, probability..." and the textbook displayed basic examples such as die throws and coin flips. Education must be remade...
0
0
2023-04-09T05:53:11
Art10001
false
null
0
jfjgcd0
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfjgcd0/
false
0
t1_jfjfu5n
They are actually numbers inside.
1
0
2023-04-09T05:47:19
Art10001
false
null
0
jfjfu5n
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfjfu5n/
false
1
t1_jfjfnqx
What inference code? Llama.cpp? For me the back-and-forth never worked...
4
0
2023-04-09T05:45:12
bitdotben
false
null
0
jfjfnqx
false
/r/LocalLLaMA/comments/12g1kb2/alpaca_30b_i_discussing_its_situation_experiences/jfjfnqx/
false
4
t1_jfjfb8s
[The Human Brain Can Create Structures in Up to 11 Dimensions](https://www.sciencealert.com/science-discovers-human-brain-works-up-to-11-dimensions)
1
0
2023-04-09T05:41:11
Art10001
false
null
0
jfjfb8s
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfjfb8s/
false
1
t1_jfjf29n
Don't forget Encarta.
5
0
2023-04-09T05:38:24
Art10001
false
null
0
jfjf29n
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfjf29n/
false
5
t1_jfjeriw
By the way, Apple's M1 has a memory bandwidth of 200 GB/s, compared to PCIe 5's 60 GB/s. And it has shared VRAM and RAM. Newer M series are even faster. I can't wait until people use it for AI and publicize this more, as well as the [Neural Engine](https://github.com/apple/ml-ane-transformers) with its 16 cores and t...
3
0
2023-04-09T05:35:02
Art10001
false
null
0
jfjeriw
false
/r/LocalLLaMA/comments/12fvq3t/does_anyone_want_to_split_a_dedicated_server_for/jfjeriw/
false
3
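Memory bandwidth matters here because, during generation, each new token reads roughly the whole weight file once. A rough upper-bound calculation, using the 200 GB/s figure from the comment above and an assumed ~4 GB 7B 4-bit model file:

```python
# Rough upper bound for a memory-bandwidth-bound model: every generated
# token streams (roughly) all the weights through the memory bus once.
bandwidth_gb_s = 200   # Apple M1 unified memory, per the comment above
model_gb = 4           # assumed: a 7B model quantized to ~4 bits

tokens_per_s = bandwidth_gb_s / model_gb
print(f"~{tokens_per_s:.0f} tokens/s upper bound")  # ~50 tokens/s, best case
```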
t1_jfjdvgd
I think doomer brainwashing from too many Hollywood movies is the reason behind that, as well as alarmists such as the insane Yudkowsky. People are brainwashed and think AGI is evil instead of good. /u/Wonderful_Ad_5134
4
0
2023-04-09T05:25:09
Art10001
false
null
0
jfjdvgd
false
/r/LocalLLaMA/comments/12ezcly/comparing_models_gpt4xalpaca_vicuna_and_oasst/jfjdvgd/
false
4
t1_jfjatfd
Thanks!
1
0
2023-04-09T04:53:53
disarmyouwitha
false
null
0
jfjatfd
false
/r/LocalLLaMA/comments/12g5lqu/are_there_any_goto_google_colabs_for_finetuning/jfjatfd/
false
1
t1_jfj95c1
Just like the (1.5) Stable Diffusion model is ~4GB and can 'create' all sorts of images you can imagine.
1
0
2023-04-09T04:36:49
sEi_
false
null
0
jfj95c1
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfj95c1/
false
1
t1_jfj8tfo
That can work. The better method of doing it is to start the response with at least one sentence of what you're looking to get out of it. A couple of words can be enough too if it points it in the right direction.
2
0
2023-04-09T04:33:35
Civil_Collection7267
false
null
0
jfj8tfo
false
/r/LocalLLaMA/comments/12ecl7k/how_to_stay_uptodate_in_the_opensource_ai_field/jfj8tfo/
false
2
t1_jfj5vbl
[https://colab.research.google.com/drive/1rqWABmz2ZfolJOdoy6TRc6YI7d128cQO#scrollTo=OdgRTo5YxyRL](https://colab.research.google.com/drive/1rqWABmz2ZfolJOdoy6TRc6YI7d128cQO#scrollTo=OdgRTo5YxyRL)
5
0
2023-04-09T04:05:00
Sixhaunt
false
null
0
jfj5vbl
false
/r/LocalLLaMA/comments/12g5lqu/are_there_any_goto_google_colabs_for_finetuning/jfj5vbl/
false
5
t1_jfj4ou5
What was the colab you used for 7b? =]
3
0
2023-04-09T03:53:29
disarmyouwitha
false
null
0
jfj4ou5
false
/r/LocalLLaMA/comments/12g5lqu/are_there_any_goto_google_colabs_for_finetuning/jfj4ou5/
false
3
t1_jfj0rr6
Apparently ChatGLM 6B is the best so far to use for coding. Not sure though, I cannot get the webui working well atm.
3
0
2023-04-09T03:18:04
ihaag
false
null
0
jfj0rr6
false
/r/LocalLLaMA/comments/12fwygw/coding_llama_modell/jfj0rr6/
false
3
t1_jfizn0j
I'm on windows 10 and I'm not receiving any error messages.
1
0
2023-04-09T03:07:49
TheAccountToBeThrown
false
null
0
jfizn0j
false
/r/LocalLLaMA/comments/12g5wcc/cant_type_when_using_llamacpp/jfizn0j/
false
1
t1_jfizjzs
There really isn't enough information here to provide an answer. Are you running it on Linux, Windows, WSL? Are you receiving any error messages? Etc? You can also try asking in the [Discussions tab](https://github.com/ggerganov/llama.cpp/discussions) on their GitHub.
1
0
2023-04-09T03:07:03
Civil_Collection7267
false
null
0
jfizjzs
false
/r/LocalLLaMA/comments/12g5wcc/cant_type_when_using_llamacpp/jfizjzs/
true
1
t1_jfirc4v
That's why I wish someone would sell API access already. Even OpenAI's per-token rates would be reasonable to me, and we already know the computing resources to run llama are much more modest. But of course licensing issues prevent that.
1
0
2023-04-09T01:57:05
deccan2008
false
null
0
jfirc4v
false
/r/LocalLLaMA/comments/12fvq3t/does_anyone_want_to_split_a_dedicated_server_for/jfirc4v/
false
1
t1_jfinpt1
[deleted]
1
0
2023-04-09T01:27:01
[deleted]
true
null
0
jfinpt1
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jfinpt1/
false
1
t1_jfil980
It's part of the prompt.
3
0
2023-04-09T01:06:25
2muchnet42day
false
null
0
jfil980
false
/r/LocalLLaMA/comments/12g1kb2/alpaca_30b_i_discussing_its_situation_experiences/jfil980/
false
3
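"It's part of the prompt" means the model itself is stateless: the front end re-sends earlier exchanges inside every new prompt. A minimal sketch follows, with `generate` standing in for whatever inference call is actually used, and the Alpaca-style template only an assumed example.

```python
# Minimal sketch of chat "memory": rebuild the whole conversation as one
# prompt on every turn, because the model itself remembers nothing.
def generate(prompt: str) -> str:
    # Stand-in for your actual inference call (llama.cpp, ooba API, etc.).
    return "(model output here)"

history: list[tuple[str, str]] = []

def chat(user_msg: str) -> str:
    prompt = "".join(
        f"### Instruction:\n{u}\n### Response:\n{a}\n" for u, a in history
    )
    prompt += f"### Instruction:\n{user_msg}\n### Response:\n"
    answer = generate(prompt)
    history.append((user_msg, answer))  # remembered only by re-sending it
    return answer
```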
t1_jfiiygn
Word, well... I watch this page like a hawk, so hopefully I'll see it! Sounds like exactly the kind of training data I'd like to see =]
2
0
2023-04-09T00:47:45
disarmyouwitha
false
null
0
jfiiygn
false
/r/LocalLLaMA/comments/12d5sxb/simple_token_counter_on_huggingface/jfiiygn/
false
2
t1_jfiipnd
Something must be wrong here then, because I've already spent half a day downloading the model. Could it be Chrome or something?
1
0
2023-04-09T00:45:44
Rare_Yam9364
false
null
0
jfiipnd
false
/r/LocalLLaMA/comments/12g2qd6/why_do_the_hugging_face_downloads_are_so_slow/jfiipnd/
false
1
t1_jfii8nc
I've had that problem before but the speed usually stabilizes and saturates the connection after some time, could be several minutes or more. You should also check that you're getting the right download speed from other services or websites.
1
0
2023-04-09T00:41:55
Civil_Collection7267
false
null
0
jfii8nc
false
/r/LocalLLaMA/comments/12g2qd6/why_do_the_hugging_face_downloads_are_so_slow/jfii8nc/
true
1
t1_jfihk2f
Not yet, I just had 3.5-turbo redo all the answers, but now I'm going over all of the answers to remove ones that contain phrases like "as an assistant", "as a language model", etc., and have it retry with those until it's all got better outputs. So I'm pretty close to being done with it, but I'm not 100% sure what I'll n...
2
0
2023-04-09T00:36:37
Sixhaunt
false
null
0
jfihk2f
false
/r/LocalLLaMA/comments/12d5sxb/simple_token_counter_on_huggingface/jfihk2f/
false
2
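A minimal sketch of the cleanup pass described above: flag dataset entries whose outputs contain refusal boilerplate so they can be regenerated. The phrase list echoes the comment; the record layout is an assumption.

```python
# Flag entries containing assistant-style refusal boilerplate for retry.
REFUSAL_PHRASES = [
    "as an assistant",
    "as a language model",
    "as an ai",
]

def needs_retry(output: str) -> bool:
    text = output.lower()
    return any(phrase in text for phrase in REFUSAL_PHRASES)

dataset = [  # assumed layout: one dict per training example
    {"instruction": "Say hi", "output": "Hi there!"},
    {"instruction": "Tell a secret", "output": "As an AI, I can't do that."},
]
retries = [row for row in dataset if needs_retry(row["output"])]
print(len(retries), "entries to regenerate")  # -> 1
```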
t1_jfihgme
It's not a general problem. I'm downloading something right now from HF at about 9MiB/sec (which is saturating my connection).
1
0
2023-04-09T00:35:51
KerfuffleV2
false
null
0
jfihgme
false
/r/LocalLLaMA/comments/12g2qd6/why_do_the_hugging_face_downloads_are_so_slow/jfihgme/
false
1
t1_jfifzm0
How do you make it remember the previous answers?
4
0
2023-04-09T00:24:16
Christ0ph_
false
null
0
jfifzm0
false
/r/LocalLLaMA/comments/12g1kb2/alpaca_30b_i_discussing_its_situation_experiences/jfifzm0/
false
4
t1_jfibrur
I liked it, it was an interesting read =]
7
0
2023-04-08T23:50:41
disarmyouwitha
false
null
0
jfibrur
false
/r/LocalLLaMA/comments/12g1kb2/alpaca_30b_i_discussing_its_situation_experiences/jfibrur/
false
7
t1_jfib5i1
This sounds very interesting — any place I can keep my eye on for updates?
1
0
2023-04-08T23:45:44
disarmyouwitha
false
null
0
jfib5i1
false
/r/LocalLLaMA/comments/12d5sxb/simple_token_counter_on_huggingface/jfib5i1/
false
1
t1_jfiajsm
I am redoing the outputs for llama using GPT3.5-turbo since it was originally done with an older version. I think my outputs go up to around 1,500 tokens but I'll check the max length when I'm completely done. Should be a significant boost over alpaca both in terms of text length and quality, especially since I used th...
1
0
2023-04-08T23:40:54
Sixhaunt
false
null
0
jfiajsm
false
/r/LocalLLaMA/comments/12d5sxb/simple_token_counter_on_huggingface/jfiajsm/
false
1
t1_jfi9wbv
Having a separate summary lora that takes the oldest input and condenses it down to just the required information is something I see others doing.
4
0
2023-04-08T23:35:42
Sixhaunt
false
null
0
jfi9wbv
false
/r/LocalLLaMA/comments/12dxl2j/adultthemed_stories_with_llama/jfi9wbv/
false
4
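A sketch of that scheme: once the history exceeds a turn budget, the oldest turns are condensed into a running summary and only the recent turns are kept verbatim. Here `summarize` is a placeholder for the separate summarization model or lora.

```python
# Rolling-summary history compaction, as described in the comment above.
def summarize(text: str) -> str:
    # Placeholder: a real summarization model/LoRA goes here.
    return text[:200]

def compact_history(turns: list[str], max_turns: int = 8) -> list[str]:
    if len(turns) <= max_turns:
        return turns
    old, recent = turns[:-max_turns], turns[-max_turns:]
    # Oldest turns collapse into one summary line; recent turns stay verbatim.
    return ["Summary of earlier chat: " + summarize(" ".join(old))] + recent
```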
t1_jfi9l10
I'm already well into creating the dataset for the "choose your own adventure" games, so if there's not already a UI and stuff for it by the time my dataset & loras are ready and trained, then I'll just do it myself. The only large database of choose-your-own-adventure games that I could find is almost entirely Adult-Them...
4
0
2023-04-08T23:33:11
Sixhaunt
false
null
0
jfi9l10
false
/r/LocalLLaMA/comments/12dxl2j/adultthemed_stories_with_llama/jfi9l10/
false
4
t1_jfi8bgb
[deleted]
1
0
2023-04-08T23:23:07
[deleted]
true
null
0
jfi8bgb
false
/r/LocalLLaMA/comments/12g1kb2/alpaca_30b_i_discussing_its_situation_experiences/jfi8bgb/
false
1
t1_jfi54ud
There's [also this list](https://github.com/underlines/awesome-marketing-datascience/blob/master/awesome-ai.md), if you wish to compare. This stuff is changing by the day though, so it's hard to stay on top of.
1
0
2023-04-08T22:57:54
SquareWheel
false
null
0
jfi54ud
false
/r/LocalLLaMA/comments/12ecl7k/how_to_stay_uptodate_in_the_opensource_ai_field/jfi54ud/
false
1
t1_jfi3e18
Thanks again for helping me to get this working. I ran the .py file and it crashed with the following error while trying to initialize koboldcpp.dll. I tried running it with and without the --noblas flag. [NOTE: I can run small models on my GPU without issue, like Pyg 1.3B with TavernAI/KoboldAI, so it's odd that ...
1
0
2023-04-08T22:45:06
Daydreamer6t6
false
2023-04-09T01:16:43
0
jfi3e18
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jfi3e18/
false
1
t1_jfhy6bn
I guess you can connect SillyTavern to the Koboldcpp endpoint and use the SillyTavern extension for that.
2
0
2023-04-08T22:05:04
aka457
false
null
0
jfhy6bn
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jfhy6bn/
false
2
t1_jfhxvf8
Oh wow, that was literally 1 click compared to the fucking around I've been through for the past month... Do you know if the koboldcpp performance is the same/similar as llamacpp? I seem to crash when connecting to koboldcpp from tavern for some reason, but I'll try figuring that out. ^ had to update server.js for s...
2
0
2023-04-08T22:02:44
illyaeater
false
2023-04-08T23:12:08
0
jfhxvf8
false
/r/LocalLLaMA/comments/12ezcly/comparing_models_gpt4xalpaca_vicuna_and_oasst/jfhxvf8/
false
2
t1_jfhxara
One possible way to get around the token limit would be supplying context. For example, including in the context: application architecture; relation to other modules; doc strings for functions that it needs to call; the current task to solve for. And something like task planning from autoAGI, where it's de...
3
0
2023-04-08T21:58:21
wind_dude
false
null
0
jfhxara
false
/r/LocalLLaMA/comments/12fwygw/coding_llama_modell/jfhxara/
false
3
t1_jfhuf3m
[This is amongst the best porn I have ever read](https://discordapp.com/channels/1089972953506123937/1093565361208700999/1094374959985467392), and I only wrote the first three paragraphs.
4
0
2023-04-08T21:36:51
PiquantAnt
false
null
0
jfhuf3m
false
/r/LocalLLaMA/comments/12dxl2j/adultthemed_stories_with_llama/jfhuf3m/
false
4
t1_jfhtnmx
I run the 7B models on GPU with oobabooga's textgen and the 13B on CPU with [koboldcpp](https://github.com/LostRuins/koboldcpp). The configuration is the same because I let TavernAI handle that, it can override the individual backends' configurations.
2
0
2023-04-08T21:31:14
WolframRavenwolf
false
null
0
jfhtnmx
false
/r/LocalLLaMA/comments/12ezcly/comparing_models_gpt4xalpaca_vicuna_and_oasst/jfhtnmx/
false
2
t1_jfhsb3i
[removed]
1
0
2023-04-08T21:21:13
[deleted]
true
null
0
jfhsb3i
false
/r/LocalLLaMA/comments/12bqai6/script_to_automatically_update_llamacpp_to_newest/jfhsb3i/
false
1
t1_jfhpqmt
I've been using ooba webui as well for chatting, guess I'll look into tavernai, thanks. Although currently I'm waiting for the ggml models to run properly on his stuff so I can run shit on my CPU with the same configuration.
1
0
2023-04-08T21:02:06
illyaeater
false
null
0
jfhpqmt
false
/r/LocalLLaMA/comments/12ezcly/comparing_models_gpt4xalpaca_vicuna_and_oasst/jfhpqmt/
false
1
t1_jfhp48u
Reading speed. About 10 tokens per second.
3
0
2023-04-08T20:57:31
teachersecret
false
null
0
jfhp48u
false
/r/LocalLLaMA/comments/12fvq3t/does_anyone_want_to_split_a_dedicated_server_for/jfhp48u/
false
3
t1_jfhoxh3
Well, it was a large amount, but even back then it was practical to 'fill'. E.g. I wrote a program in Basic that had to be split into 3 or 4 programs that had to load each other from floppy disk to run in a 48K Apple II's memory. (A drawing program, an automatic 'hires' screen image to sprite converter, and a scree...
2
0
2023-04-08T20:56:08
No_Opposite_4334
false
null
0
jfhoxh3
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfhoxh3/
false
2
t1_jfhnc7s
What’s the approximate inference speed on that?
1
0
2023-04-08T20:44:35
oscarpildez
false
null
0
jfhnc7s
false
/r/LocalLLaMA/comments/12fvq3t/does_anyone_want_to_split_a_dedicated_server_for/jfhnc7s/
false
1
t1_jfhn23q
Just build a rig for it. A 3060 12GB is like $320 and can run 13b just fine. Slap that card into almost any computer built in the last ten years and you're done.
2
0
2023-04-08T20:42:32
teachersecret
false
null
0
jfhn23q
false
/r/LocalLLaMA/comments/12fvq3t/does_anyone_want_to_split_a_dedicated_server_for/jfhn23q/
false
2
t1_jfhms06
There's already a version of GPT4 with a 32k token limit: https://help.openai.com/en/articles/7127966-what-is-the-difference-between-the-gpt-4-models Not sure it's 100% available via the API for everyone, but it probably won't be too long even if not. Of course, you'll have to pay for those API calls. (I think all mod...
3
0
2023-04-08T20:40:31
KerfuffleV2
false
null
0
jfhms06
false
/r/LocalLLaMA/comments/12fwygw/coding_llama_modell/jfhms06/
false
3
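For reference, calling the 32k model looked like this with the openai Python library of that era (the pre-1.0 API); access to gpt-4-32k and the key are assumptions.

```python
# Sketch: chat completion against the 32k-context model via the pre-1.0
# openai library, assuming your account has gpt-4-32k access.
import openai

openai.api_key = "sk-..."  # your key

resp = openai.ChatCompletion.create(
    model="gpt-4-32k",
    messages=[{"role": "user", "content": "Summarize this long codebase..."}],
)
print(resp["choices"][0]["message"]["content"])
```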
t1_jfhlr76
But I heard API pricing is rapidly going down, so it shouldn't take too long until it becomes realisable, right? But wait, you said that there is still no locally running model with 8k max output... slowly I understand...
1
0
2023-04-08T20:32:58
StressEmpty
false
null
0
jfhlr76
false
/r/LocalLLaMA/comments/12fwygw/coding_llama_modell/jfhlr76/
false
1
t1_jfhl8ap
We can hope, but training high quality language models is currently a massive (and expensive) undertaking. That's why basically all the local models are based on fine-tuning llama.
3
0
2023-04-08T20:29:02
KerfuffleV2
false
null
0
jfhl8ap
false
/r/LocalLLaMA/comments/12fwygw/coding_llama_modell/jfhl8ap/
false
3
t1_jfhkwn4
Sad to hear that. Do you think something similar will get released in the future?
1
0
2023-04-08T20:26:35
StressEmpty
false
null
0
jfhkwn4
false
/r/LocalLLaMA/comments/12fwygw/coding_llama_modell/jfhkwn4/
false
1
t1_jfhk4ke
llama has a context limit of 2,048 so it's going to be much more limited in that respect than ChatGPT. As far as I know, none of the models that can be run locally at the moment have a higher context limit except for RWKV (and it's dubious how much of that it can make use of). Also, I don't think there are RWKV models ...
7
0
2023-04-08T20:20:53
KerfuffleV2
false
null
0
jfhk4ke
false
/r/LocalLLaMA/comments/12fwygw/coding_llama_modell/jfhk4ke/
false
7
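In practice the 2,048-token window is shared between the prompt and the generated text, so front ends trim the oldest content to fit. A sketch follows, using words as a stand-in for tokens (a real tokenizer counts differently).

```python
# Fitting a prompt into a fixed context window: prompt and reply share the
# same budget, so the oldest history is dropped first.
CONTEXT = 2048

def fit_prompt(history_words: list[str], reserve_for_output: int = 256) -> str:
    budget = CONTEXT - reserve_for_output  # leave room for the reply
    return " ".join(history_words[-budget:])  # keep only the newest words
```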
t1_jfhjrif
> but I'm not really familiar with the data format the models weights have. Tensors (basically arrays in various dimensions) of floating point numbers. Knowing that doesn't tell you anything about how the information in the model is actually organized or compressed though. Because of how the models are trained, no o...
1
0
2023-04-08T20:18:09
KerfuffleV2
false
null
0
jfhjrif
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfhjrif/
false
1
t1_jfhjr9x
Would be awesome!
2
0
2023-04-08T20:18:05
ChoiceOwn555
false
null
0
jfhjr9x
false
/r/LocalLLaMA/comments/12fwygw/coding_llama_modell/jfhjr9x/
false
2
t1_jfhj5ml
I'm just saying that if you're going to pay for a system to run this, you could be running it off the GPU so it runs at a reasonable speed. Renting a machine to run it on CPU seems like a waste and at that point most people can just run it at home. Having more RAM isn't going to make much difference. What you want is a...
9
0
2023-04-08T20:13:40
Sixhaunt
false
null
0
jfhj5ml
false
/r/LocalLLaMA/comments/12fvq3t/does_anyone_want_to_split_a_dedicated_server_for/jfhj5ml/
false
9
t1_jfhi673
I’m open to alternative suggestions? Should I bite the bullet and buy a setup?
2
0
2023-04-08T20:06:21
oscarpildez
false
null
0
jfhi673
false
/r/LocalLLaMA/comments/12fvq3t/does_anyone_want_to_split_a_dedicated_server_for/jfhi673/
false
2
t1_jfhhzzj
Why rent a machine if you're just planning to run it off the CPU anyway?
1
0
2023-04-08T20:05:07
Sixhaunt
false
null
0
jfhhzzj
false
/r/LocalLLaMA/comments/12fvq3t/does_anyone_want_to_split_a_dedicated_server_for/jfhhzzj/
false
1
t1_jfhaokx
A new model would need to be trained on a dataset with all of the problematic parts removed. There's a pruned [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) dataset that a new Vicuna could be trained on, but so far no one has stepped up to fill that role.
5
0
2023-04-08T19:11:00
Civil_Collection7267
false
null
0
jfhaokx
false
/r/LocalLLaMA/comments/12ezcly/comparing_models_gpt4xalpaca_vicuna_and_oasst/jfhaokx/
false
5
t1_jfh7nxg
You can prompt Vicuna in a way those words can't be said. The model, with the duct tape removed, is the best local model I have used for brainstorming.
2
0
2023-04-08T18:51:16
Nearby_Yam286
false
null
0
jfh7nxg
false
/r/LocalLLaMA/comments/12ecl7k/how_to_stay_uptodate_in_the_opensource_ai_field/jfh7nxg/
false
2
t1_jfh14lc
[deleted]
1
0
2023-04-08T18:08:38
[deleted]
true
2023-04-09T20:47:05
0
jfh14lc
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfh14lc/
false
1
t1_jfh0r1u
The bit about restricting yourself to 65k English words is important. You don’t need to store each character if you just use a lookup table. So 0 = the, 1 = alpaca, etc. My understanding is that that’s not quite how GPT tokenization works. The tokens are mapped to numbers in the 16-bit space but the tokens themselve...
2
0
2023-04-08T18:06:15
UnlikelyEpigraph
false
null
0
jfh0r1u
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfh0r1u/
false
2
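The lookup-table idea from the comment above, as a tiny sketch. As the comment also notes, real GPT tokenizers map byte-pair subwords rather than whole words, so this is only the intuition, not the actual scheme.

```python
# Word-level lookup table: store one small id per known word instead of
# the characters themselves.
vocab = {"the": 0, "alpaca": 1, "ate": 2, "grass": 3}
inverse = {i: w for w, i in vocab.items()}

ids = [vocab[w] for w in "the alpaca ate the grass".split()]
print(ids)                                # [0, 1, 2, 0, 3]
print(" ".join(inverse[i] for i in ids))  # round-trips to the sentence
```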
t1_jfh089f
That's helpful to know. Is there any setting to make it use 100% of the CPU power? None of my hardware shows as being bottlenecked but it only processes with 50-60% CPU power.
2
0
2023-04-08T18:02:49
ZimsZee
false
null
0
jfh089f
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jfh089f/
false
2
t1_jfh089l
Patterns. Also, imagine a god-like AI contained in 4GB.
1
0
2023-04-08T18:02:49
planetoryd
false
null
0
jfh089l
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfh089l/
false
1
t1_jfgzbc0
I currently have no goals of making this into a certified product. Besides, the LLaMA license would not allow this. But I am sure there will be companies producing similar LLM solutions for healthcare very soon.
1
0
2023-04-08T17:56:21
SessionComplete2334
false
null
0
jfgzbc0
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jfgzbc0/
false
1
t1_jfgxzyt
Excellent, I will keep an eye out. Is there any plan for a professional product such as this in the future? I would definitely pay for a license for something like this if reliability and privacy issues were fully addressed.
1
0
2023-04-08T17:47:15
Yahakshan
false
null
0
jfgxzyt
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jfgxzyt/
false
1
t1_jfgxmj7
Sorry, not at the moment. We will probably add them once the other models perform satisfactorily.
1
0
2023-04-08T17:44:40
SessionComplete2334
false
null
0
jfgxmj7
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jfgxmj7/
false
1
t1_jfgx8m1
Do you have a 7b 4bit version?
2
0
2023-04-08T17:41:55
Yahakshan
false
null
0
jfgx8m1
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jfgx8m1/
false
2
t1_jfgwoau
How do you get it to make images at all? I have only just gotten 7b 4bit alpaca to work (13b was way too unstable on my 1080ti)
1
0
2023-04-08T17:37:59
Yahakshan
false
null
0
jfgwoau
false
/r/LocalLLaMA/comments/11wwwjq/graphic_text_adventure_game_locally_with_llama/jfgwoau/
false
1