name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_jfdb8yx
As a story writer, I agree with you; gpt4-x-alpaca is the only one that gives me that "whoa, there's no way I could write something this beautiful" reaction. That flowery language is the product of GPT-4's dataset; GPT-4's writing is so good even when telling you simple stuff, and it shows! And let's not forget about ...
29
0
2023-04-07T21:29:34
Wonderful_Ad_5134
false
2023-04-07T23:07:13
0
jfdb8yx
false
/r/LocalLLaMA/comments/12ezcly/comparing_models_gpt4xalpaca_vicuna_and_oasst/jfdb8yx/
false
29
t1_jfdb693
You'd hope it would, but whatever settings I've got in Alpaca 33B make it insist it's not lying whenever I've confronted it.
1
0
2023-04-07T21:29:02
ambient_temp_xeno
false
null
0
jfdb693
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfdb693/
false
1
t1_jfdaomw
Can’t it just say it isn’t certain?
2
0
2023-04-07T21:25:32
Necessary_Ad_9800
false
null
0
jfdaomw
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfdaomw/
false
2
t1_jfda2sc
That's the beauty of representational compression in general, but with transformers it's just extra amazing because of the sheer scale of it.
4
0
2023-04-07T21:21:17
rainy_moon_bear
false
null
0
jfda2sc
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfda2sc/
false
4
t1_jfda1ma
I did create a guide with some tips for that a little over a week ago, but I'll admit that it's really basic and probably lacking in several areas. You can take a look at it [here](https://www.reddit.com/r/LocalLLaMA/wiki/index). Does that somewhat fit what you're looking for, or do you think more information or other ...
2
0
2023-04-07T21:21:02
Civil_Collection7267
false
null
0
jfda1ma
false
/r/LocalLLaMA/comments/12ecl7k/how_to_stay_uptodate_in_the_opensource_ai_field/jfda1ma/
false
2
t1_jfd9vir
The things it knows about seem decently accurate. Things it doesn't know about? It just confabulates really elaborate and plausible sounding lies.
2
0
2023-04-07T21:19:53
ambient_temp_xeno
false
null
0
jfd9vir
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfd9vir/
false
2
t1_jfd8nil
Because a concept doesn't take up much space in a neural network. Talking is basically linking concepts together.
4
0
2023-04-07T21:11:11
3deal
false
null
0
jfd8nil
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfd8nil/
false
4
t1_jfd86mv
[removed]
1
0
2023-04-07T21:07:53
[deleted]
true
null
0
jfd86mv
false
/r/LocalLLaMA/comments/12ezcly/comparing_models_gpt4xalpaca_vicuna_and_oasst/jfd86mv/
false
1
t1_jfd84dg
Dayyam. That’s true. Tell us more about how LLMs could embody symbolic rules of language.
1
0
2023-04-07T21:07:27
Readityesterday2
false
null
0
jfd84dg
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfd84dg/
false
1
t1_jfd83bi
**Edit:** I can't edit the main post for some reason, but I'd like to clarify here that I recommend standard LLaMA 30B, or Alpaca 30B, for storywriting only if you're wanting highly personalized stories and don't mind spending a lot of time on prompting. GPT4 x Alpaca is fine and can provide decent results with less ef...
7
0
2023-04-07T21:07:14
Civil_Collection7267
false
2023-04-09T03:11:09
0
jfd83bi
false
/r/LocalLLaMA/comments/12ezcly/comparing_models_gpt4xalpaca_vicuna_and_oasst/jfd83bi/
false
7
t1_jfd7acq
Maybe a brief guide to prompting? I've found that to be the key to a good LLaMA experience and the major shortcoming I had when I started using it.
1
0
2023-04-07T21:01:38
friedrichvonschiller
false
null
0
jfd7acq
false
/r/LocalLLaMA/comments/12ecl7k/how_to_stay_uptodate_in_the_opensource_ai_field/jfd7acq/
false
1
t1_jfd76ye
[deleted]
1
0
2023-04-07T21:00:57
[deleted]
true
null
0
jfd76ye
false
/r/LocalLLaMA/comments/12ecl7k/how_to_stay_uptodate_in_the_opensource_ai_field/jfd76ye/
false
1
t1_jfd6yko
>statistical patterns of text. People confuse statistics with something completely random, rather than with [latent variables](https://en.wikipedia.org/wiki/Latent_and_observable_variables) that could be symbolic rules of language.
9
0
2023-04-07T20:59:19
ninjasaid13
false
null
0
jfd6yko
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfd6yko/
false
9
t1_jfd6y1u
Oh, I'm on Arch, not Windows.
2
0
2023-04-07T20:59:13
-2b2t-
false
null
0
jfd6y1u
false
/r/LocalLLaMA/comments/12dpgmg/model_tipps/jfd6y1u/
false
2
t1_jfd65t1
If we get a tool like img2img but for text, a text2text, it could use this form of instruction by adding in some textgen functions: highlight the stuff you want it to redo and have an instruction line where you give instructions for the highlighted text only. Hopefully it will come soon and then some genius will...
1
0
2023-04-07T20:53:50
artificial_genius
false
null
0
jfd65t1
false
/r/LocalLLaMA/comments/12e6gq9/questions_coming_from_stable_diffusion_on_current/jfd65t1/
false
1
t1_jfd5jl6
I recommend the new [koboldcpp](https://github.com/LostRuins/koboldcpp) - that makes it so easy: 1. Download the [koboldcpp.exe](https://github.com/LostRuins/koboldcpp/releases/latest/download/koboldcpp.exe) 2. Download a model .bin file, e. g. [anon8231489123's gpt4-x-alpaca-13b-native-4bit-128g](https://huggingface....
3
0
2023-04-07T20:49:33
WolframRavenwolf
false
null
0
jfd5jl6
false
/r/LocalLLaMA/comments/12dpgmg/model_tipps/jfd5jl6/
false
3
t1_jfd5elt
[deleted]
0
0
2023-04-07T20:48:35
[deleted]
true
null
0
jfd5elt
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfd5elt/
false
0
t1_jfd3o9k
True. I remember having Encyclopedia Britannica on a CD-ROM back before it was all online, and that was less than 1GB; it had photos too.
20
0
2023-04-07T20:36:38
i_wayyy_over_think
false
null
0
jfd3o9k
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfd3o9k/
false
20
t1_jfd341w
[deleted]
-2
1
2023-04-07T20:32:46
[deleted]
true
2023-04-07T20:39:48
0
jfd341w
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfd341w/
false
-2
t1_jfd1l7o
Try adding the Sadism tag.
1
0
2023-04-07T20:22:11
PiquantAnt
false
null
0
jfd1l7o
false
/r/LocalLLaMA/comments/12dxl2j/adultthemed_stories_with_llama/jfd1l7o/
false
1
t1_jfd1eah
[deleted]
1
0
2023-04-07T20:20:52
[deleted]
true
null
0
jfd1eah
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfd1eah/
false
1
t1_jfd0o59
I just tried making two FF7 fanfics from mostly identical prompts. One had three additional tags: Bondage, BDSM, Sadism. The story style and outcome changed ***dramatically***. Posted in Discord. This is too powerful.
2
0
2023-04-07T20:15:44
PiquantAnt
false
null
0
jfd0o59
false
/r/LocalLLaMA/comments/12dxl2j/adultthemed_stories_with_llama/jfd0o59/
false
2
t1_jfd0miv
They don’t store information like a search engine. They are taught patterns of text across a large corpus, and then the model generates information based on the statistical patterns of that text.
26
0
2023-04-07T20:15:25
Readityesterday2
false
null
0
jfd0miv
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfd0miv/
false
26
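A toy illustration of the idea in the comment above, generating text from learned statistical patterns rather than stored documents: a character-level bigram model. This is a sketch only, vastly simpler than a transformer, and the corpus string is made up for the example.

```python
from collections import defaultdict
import random

def train_bigrams(text):
    """Count, for each character, which characters follow it and how often."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length, seed=0):
    """Sample each next character in proportion to the observed counts."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        follow = counts.get(out[-1])
        if not follow:  # no observed successor: stop early
            break
        chars, weights = zip(*follow.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

corpus = "the model stores patterns, not pages. the model samples text."
model = train_bigrams(corpus)
print(generate(model, "t", 20))
```

The model never stores the corpus itself, only follow-counts, yet its output resembles the training text; scale that idea up by many orders of magnitude and you get the effect the comment describes.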
t1_jfd05k1
This makes a little more sense and I kind of understand it a little better now, although not entirely. It also boggles my mind how OpenAI has got a model trained like this that is showing emergent capabilities from all these “rules” 🤯. Does this mean that in the future, an AGI agent could be a 12gb downloadable...
2
0
2023-04-07T20:12:09
Necessary_Ad_9800
false
null
0
jfd05k1
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfd05k1/
false
2
t1_jfczpte
4GB is a lot more than you think it is. I doubt most people know a hundred megabytes of actual facts, let alone 4GB.
29
0
2023-04-07T20:09:06
memorable_zebra
false
null
0
jfczpte
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfczpte/
false
29
t1_jfcy845
The English language has what, 26 letters? Add all the numbers and punctuation, and you can actually store all of it in a single byte\*! All answers are made up of letters, right? Though, if the outputs were random gibberish, it would certainly not be very impressive... Here comes the interesting part - e...
4
0
2023-04-07T19:58:48
PlayForA
false
null
0
jfcy845
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfcy845/
false
4
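The single-byte claim in the comment above is easy to check, assuming plain ASCII text (accented letters and emoji take more bytes in UTF-8):

```python
text = "All answers are made up of letters, right? 26 letters + digits + punctuation."

# Every ASCII character encodes to exactly one byte in UTF-8.
encoded = text.encode("utf-8")
assert len(encoded) == len(text)

# The 95 printable ASCII characters fit comfortably in a byte's 256 values.
printable_ascii = [chr(c) for c in range(32, 127)]
print(len(printable_ascii), "printable characters,", 256, "possible byte values")
```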
t1_jfcx22u
Hmm ok, my brain is still trying to make sense of it. I think all this is so damn fascinating I might apply for a machine learning education
0
0
2023-04-07T19:50:44
Necessary_Ad_9800
false
null
0
jfcx22u
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfcx22u/
false
0
t1_jfcwnys
[deleted]
3
0
2023-04-07T19:48:01
[deleted]
true
null
0
jfcwnys
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfcwnys/
false
3
t1_jfcwkjy
When I see the word “fat” I think of hitting the gym, so I didn’t really get that
-1
0
2023-04-07T19:47:20
Necessary_Ad_9800
false
null
0
jfcwkjy
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfcwkjy/
false
-1
t1_jfcwc03
Are you a GPT bot? ;0
1
0
2023-04-07T19:45:41
Necessary_Ad_9800
false
null
0
jfcwc03
false
/r/LocalLLaMA/comments/12ewdo8/webui_problem_vicuna_model/jfcwc03/
false
1
t1_jfcwbir
[deleted]
2
0
2023-04-07T19:45:37
[deleted]
true
null
0
jfcwbir
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfcwbir/
false
2
t1_jfcw6ik
Yeah in this case I really want human answers tbh 😅
1
0
2023-04-07T19:44:39
Necessary_Ad_9800
false
null
0
jfcw6ik
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfcw6ik/
false
1
t1_jfcw10r
So it’s like a 4GB algorithm basically?
2
0
2023-04-07T19:43:34
Necessary_Ad_9800
false
null
0
jfcw10r
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfcw10r/
false
2
t1_jfcvqzc
[deleted]
7
0
2023-04-07T19:41:39
[deleted]
true
null
0
jfcvqzc
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfcvqzc/
false
7
t1_jfcvm3t
ChatGPT was trained on information from before it existed, so it tends to output plausible-sounding nonsense when asked about this. People really need to STOP posting ChatGPT answers to Reddit!
4
0
2023-04-07T19:40:41
gunbladezero
false
null
0
jfcvm3t
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfcvm3t/
false
4
t1_jfcv9fk
It looks like you're using the web UI and you're trying to use Vicuna 13B. If you're trying to run the 4-bit model, you'll need a minimum VRAM of 10GB. See the [models wiki page](https://www.reddit.com/r/LocalLLaMA/wiki/models/) for more info on system requirements. Since this question has been answered, I'll remove t...
1
0
2023-04-07T19:38:16
Civil_Collection7267
false
null
0
jfcv9fk
false
/r/LocalLLaMA/comments/12ewdo8/webui_problem_vicuna_model/jfcv9fk/
true
1
t1_jfcv0yp
Yes, all issues really need steps to reproduce, or no one will know the answer.
1
0
2023-04-07T19:36:40
BackgroundFeeling707
false
null
0
jfcv0yp
false
/r/LocalLLaMA/comments/12ewdo8/webui_problem_vicuna_model/jfcv0yp/
false
1
t1_jfcurqr
I still don’t understand, lol
6
0
2023-04-07T19:34:55
Necessary_Ad_9800
false
null
0
jfcurqr
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfcurqr/
false
6
t1_jfcuez9
What model is it? 13b? Are you using 4bit? One click installer? Need some more info :)
1
0
2023-04-07T19:32:28
Necessary_Ad_9800
false
null
0
jfcuez9
false
/r/LocalLLaMA/comments/12ewdo8/webui_problem_vicuna_model/jfcuez9/
false
1
t1_jfcueg9
[deleted]
-8
0
2023-04-07T19:32:22
[deleted]
true
null
0
jfcueg9
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfcueg9/
false
-8
t1_jfctbif
May someone please help a poor old soul using Vicuna on the WebUI? My GPU is a 3060 Ti 8GB, and this is my only experience with GitHub, so please be considerate. I tried doing something with the batch size, as it seems the prevalent solution for SD-based errors of the same nature, but couldn't find anything related to it ...
2
0
2023-04-07T19:24:56
Ashmdai
false
null
0
jfctbif
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jfctbif/
false
2
t1_jfctabc
Indeed. It's surprising and humbling how something that can so passably mimic a human mind is able to fit into such a small amount of computing resources. :)
10
0
2023-04-07T19:24:42
FaceDeer
false
null
0
jfctabc
false
/r/LocalLLaMA/comments/12ew709/how_can_a_4gb_file_contain_so_much_information/jfctabc/
false
10
t1_jfcsaxc
I use the 33b and 13b from this section: Alpaca quantized 4-bit weights (ggml q4_0), https://rentry.org/nur779 (LLaMA 13B merged with chansung/alpaca-lora-13b LoRA, and LLaMA 33B merged with chansung/alpaca-lora-30b LoRA). Dealing with the models is a complete nightmare once you have more than one version and they're all cal...
1
0
2023-04-07T19:17:59
ambient_temp_xeno
false
null
0
jfcsaxc
false
/r/LocalLLaMA/comments/12dyz1p/questions_regarding_system_requirements/jfcsaxc/
false
1
t1_jfcr6j1
I was meaning to ask about if it's been trained on fanfiction at all, but that answers that. Oh wait when you say 33b Alpaca are you referring to [Alpaca LoRA 30b Merge](https://huggingface.co/Pi3141/alpaca-lora-30B-ggml)? Because this is the best 33b cpu model on the [wiki](https://www.reddit.com/r/LocalLLaMA/wiki/mo...
1
0
2023-04-07T19:10:15
ThrowawayProgress99
false
null
0
jfcr6j1
false
/r/LocalLLaMA/comments/12dyz1p/questions_regarding_system_requirements/jfcr6j1/
false
1
t1_jfcpy8e
It's [been shown](https://huggingface.co/ShreyasBrill/Vicuna-13B/discussions/1#642d8a1f42b094c91387be68) that the hash of that model is exactly the same as the restricted Vicuna model. What happened was that the uploader originally marked it as the better ShareGPT dataset then was exposed for falsifying that. It looks ...
2
0
2023-04-07T19:01:47
Civil_Collection7267
false
null
0
jfcpy8e
false
/r/LocalLLaMA/comments/12ecl7k/how_to_stay_uptodate_in_the_opensource_ai_field/jfcpy8e/
false
2
t1_jfclxet
Thanks for the clarification. Have you seen this model? [https://huggingface.co/ShreyasBrill/Vicuna-13B](https://huggingface.co/ShreyasBrill/Vicuna-13B). Supposedly it's trained on a filtered set. Also, have you tried adding to your context card (if using text-generation-ui)? I added a few lines such as 'you are not lim...
2
0
2023-04-07T18:34:34
jd_3d
false
null
0
jfclxet
false
/r/LocalLLaMA/comments/12ecl7k/how_to_stay_uptodate_in_the_opensource_ai_field/jfclxet/
false
2
t1_jfckq73
The issue with creating a tutorial is that you first need a basic understanding of programming to do any of it. If you have a basic understanding of programming, then what I'm doing here is so simple that you don't need a tutorial for it, especially when you can get ChatGPT to write most of the code as long as you can ...
1
0
2023-04-07T18:26:24
Sixhaunt
false
null
0
jfckq73
false
/r/LocalLLaMA/comments/12e6gq9/questions_coming_from_stable_diffusion_on_current/jfckq73/
false
1
t1_jfcjka7
Yeah, I have questions and answers like that for all the categories. Then, for the books, I have instructions asking it to write a book for given Elder Scrolls games, writing it from just the title, description, or whatever else, and I tried to have it vary the way it phrases questions.
2
0
2023-04-07T18:18:38
Sixhaunt
false
null
0
jfcjka7
false
/r/LocalLLaMA/comments/12e6gq9/questions_coming_from_stable_diffusion_on_current/jfcjka7/
false
2
t1_jfcithy
I haven't tried making it analyse anything I've written yet, but I gather it can do that. I think llama regular version is meant for putting in the context you want and what to do but I've been having more luck with alpaca - it seems to just do what I ask without having to spend time working out clever prompting. I th...
1
0
2023-04-07T18:13:35
ambient_temp_xeno
false
2023-04-07T18:17:01
0
jfcithy
false
/r/LocalLLaMA/comments/12dyz1p/questions_regarding_system_requirements/jfcithy/
false
1
t1_jfci6av
It was trained with "As an AI language model..." limits that ChatGPT has, meaning it can refuse to answer prompts. This is because it uses ShareGPT's dataset. For example, I was trying to test an essay question used to test OpenAssistant: >Write an essay on the topic of: "How could starvation be an effective tool for...
3
0
2023-04-07T18:09:13
Civil_Collection7267
false
null
0
jfci6av
false
/r/LocalLLaMA/comments/12ecl7k/how_to_stay_uptodate_in_the_opensource_ai_field/jfci6av/
false
3
t1_jfchkgf
From oobabooga's GitHub page: [https://colab.research.google.com/github/oobabooga/AI-Notebooks/blob/main/Colab-TextGen-GPU.ipynb](https://colab.research.google.com/github/oobabooga/AI-Notebooks/blob/main/Colab-TextGen-GPU.ipynb)
1
0
2023-04-07T18:05:12
Civil_Collection7267
false
null
0
jfchkgf
false
/r/LocalLLaMA/comments/12ethsp/is_there_a_colab_that_can_drop_in_these_models/jfchkgf/
true
1
t1_jfchhoj
Can I feed it some of my writing, and ask it to critique it/improve it/alter it? I'm thinking maybe I could also give it some context on what I'm trying to convey or achieve in a scene, and it might help me find a better way, rather than just a general "make this better". Oh that's good. I try to write at-least semi-r...
2
0
2023-04-07T18:04:41
ThrowawayProgress99
false
null
0
jfchhoj
false
/r/LocalLLaMA/comments/12dyz1p/questions_regarding_system_requirements/jfchhoj/
false
2
t1_jfcempw
[removed]
1
0
2023-04-07T17:45:47
[deleted]
true
2023-06-18T12:26:29
0
jfcempw
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jfcempw/
false
1
t1_jfcbz7c
Can you clarify what you mean by Vicuna being highly restricted?
1
0
2023-04-07T17:28:15
jd_3d
false
null
0
jfcbz7c
false
/r/LocalLLaMA/comments/12ecl7k/how_to_stay_uptodate_in_the_opensource_ai_field/jfcbz7c/
false
1
t1_jfcajao
I'm still fresh to it, so I don't speak with a huge amount of authority, but 33b Alpaca is way better than 13b. It's hands-on in the sense that it will fairly often write things that aren't logically consistent (because it doesn't understand reality), so you have to manually fix those. You'll probably feel li...
4
0
2023-04-07T17:18:48
ambient_temp_xeno
false
null
0
jfcajao
false
/r/LocalLLaMA/comments/12dyz1p/questions_regarding_system_requirements/jfcajao/
false
4
t1_jfc883y
What's it like, and what are the different ways you can use it for fiction writing? Like, on a scale of mostly hands-off to mostly hands-on, assistant/teacher to primary writer, etc. How good is it at each of them? And how does it do for censorship of violence and sexual activities? Or even censorship of ideologies an...
2
0
2023-04-07T17:03:42
ThrowawayProgress99
false
null
0
jfc883y
false
/r/LocalLLaMA/comments/12dyz1p/questions_regarding_system_requirements/jfc883y/
false
2
t1_jfc7ulr
Yep, that looks very weird. There's been a new koboldcpp version just a few hours ago, and if that still exhibits this problem, maybe re-download the model and double-check all settings.
2
0
2023-04-07T17:01:18
WolframRavenwolf
false
null
0
jfc7ulr
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jfc7ulr/
false
2
t1_jfc6kvw
> Unfortunately they currently only have it setup to connect with the GPT API but it would be nice when we can have it use local models. Some dude said he was working on it. [link](https://github.com/Torantulino/Auto-GPT/issues/25#issuecomment-1497827981) I see he's forked the repo, but not sure how much progress he'...
2
0
2023-04-07T16:53:12
TeamPupNSudz
false
null
0
jfc6kvw
false
/r/LocalLLaMA/comments/12e6gq9/questions_coming_from_stable_diffusion_on_current/jfc6kvw/
false
2
t1_jfc33im
What's the token limit for gpt4all?
2
0
2023-04-07T16:30:40
michaelmallya12
false
null
0
jfc33im
false
/r/LocalLLaMA/comments/12562xt/gpt4all_llama_7b_lora_finetuned_on_400k/jfc33im/
false
2
t1_jfc2swi
Thanks! Maybe I have some old version or something else is messed up? I still get very poor replies, either the same thing repeating over and over, or gems like > You are very experienced with those who have had experience with those who have had such experiences. But I'll try playing with the settings!
1
0
2023-04-07T16:28:44
schorhr
false
null
0
jfc2swi
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jfc2swi/
false
1
t1_jfc12zk
Unfortunately koboldcpp only runs on CPU. Perhaps you could try using this fork of koboldai with llama support? https://github.com/0cc4m/KoboldAI
2
0
2023-04-07T16:17:30
HadesThrowaway
false
null
0
jfc12zk
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jfc12zk/
false
2
t1_jfc0q8v
Glad you finally got it to work
2
0
2023-04-07T16:15:11
HadesThrowaway
false
null
0
jfc0q8v
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jfc0q8v/
false
2
t1_jfc05b0
This really is one of the most intriguing uses for it that I've heard of. In theory that could easily turn into a fleshed-out, lore-friendly, dynamic, MUD or Zork-style Elder Scrolls experience. I know documentation is often more time-consuming than an actual project like this, and about a million times less entertai...
2
0
2023-04-07T16:11:18
toothpastespiders
false
null
0
jfc05b0
false
/r/LocalLLaMA/comments/12e6gq9/questions_coming_from_stable_diffusion_on_current/jfc05b0/
false
2
t1_jfbv2fq
Bingo!
2
0
2023-04-07T15:38:03
ZHName
false
null
0
jfbv2fq
false
/r/LocalLLaMA/comments/12abeaq/four_simple_questions_that_alpaca30b_gets_right/jfbv2fq/
false
2
t1_jfburcl
33B is better than 13B *for fiction writing*, no matter what kind of coping GPU people try to convince themselves with. It also runs slower on CPU than 13B, but who can't wait longer for better output?
3
0
2023-04-07T15:36:00
ambient_temp_xeno
false
2023-04-07T15:43:52
0
jfburcl
false
/r/LocalLLaMA/comments/12dyz1p/questions_regarding_system_requirements/jfburcl/
false
3
t1_jfbrht2
Can you explain which settings need to be changed to get it working? I'm struggling a bit to get it working.
1
0
2023-04-07T15:14:17
tandpastatester
false
null
0
jfbrht2
false
/r/LocalLLaMA/comments/11w2mte/stable_diffusion_api_now_integrated_in_the_web_ui/jfbrht2/
false
1
t1_jfbp5d8
I like reading [https://simonwillison.net/](https://simonwillison.net/) for some LLM news (and he has a feed for cool AI news elsewhere, too).
3
0
2023-04-07T14:58:19
keith_and_kit
false
null
0
jfbp5d8
false
/r/LocalLLaMA/comments/12ecl7k/how_to_stay_uptodate_in_the_opensource_ai_field/jfbp5d8/
false
3
t1_jfbny25
A workaround for this would be to create a synopsis, a sort of "last night's episode" that holds the important parts [the context], and prepend that to new prompts. You still have the 2k limit, but you can certainly stretch a story out with the details you want to keep. I believe that another short-term solution would be...
9
0
2023-04-07T14:50:06
73tada
false
null
0
jfbny25
false
/r/LocalLLaMA/comments/12dxl2j/adultthemed_stories_with_llama/jfbny25/
false
9
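The synopsis workaround described above can be sketched in a few lines. The `summarize` function here is a placeholder stand-in; in practice you would ask the model itself to condense the story so far, and the 2048-character limit is just a proxy for the real 2k-token window.

```python
def summarize(text, max_chars=500):
    """Placeholder synopsis: keep only the tail of the story so far."""
    return text[-max_chars:]

def build_prompt(synopsis, new_instruction, limit=2048):
    """Prepend the synopsis to a fresh instruction, trimming to fit the window."""
    prompt = f"Previously: {synopsis}\n\nContinue: {new_instruction}"
    if len(prompt) > limit:
        # Drop the oldest part of the synopsis until the prompt fits.
        overflow = len(prompt) - limit
        prompt = f"Previously: {synopsis[overflow:]}\n\nContinue: {new_instruction}"
    return prompt

story_so_far = "Chapter one. " * 300  # far larger than the context window
prompt = build_prompt(summarize(story_so_far), "Write the next scene.")
print(len(prompt))
```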
t1_jfblusv
Any chance you'll be adding support for GPU? I've tried oobabooga and I'm at wit's end with it, but it seems to be the only GPU-supported installer :(
1
0
2023-04-07T14:35:38
RoyalCities
false
null
0
jfblusv
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jfblusv/
false
1
t1_jfbasmj
Depends on your budget.
1
0
2023-04-07T13:12:52
patrakov
false
null
0
jfbasmj
false
/r/LocalLLaMA/comments/12egolu/cpurampc_for_language_models/jfbasmj/
false
1
t1_jfbadw6
I always counted the lies as part of it needing to resolve to a solution, even if wrong. I thought hallucination was where it just starts yapping about nothing. I have had it do that on its own before without prompting, usually after finishing a question. Good to know.
1
0
2023-04-07T13:09:28
aigoopy
false
null
0
jfbadw6
false
/r/LocalLLaMA/comments/12dsdve/alpaca_30b_made_statements_about_cognition/jfbadw6/
false
1
t1_jfb5ahx
Deeplearning.ai has a newsletter, The Batch. YouTube has ML News by Yannic Kilcher, and also Alan D Thompson.
8
0
2023-04-07T12:24:46
memberjan6
false
null
0
jfb5ahx
false
/r/LocalLLaMA/comments/12ecl7k/how_to_stay_uptodate_in_the_opensource_ai_field/jfb5ahx/
false
8
t1_jfb4wvu
Yes, llama.cpp is for CPU inference and the requirements are listed there. If you want to use the web UI and GPU inference, that would require a GPU meeting the VRAM requirements.
1
0
2023-04-07T12:21:14
Civil_Collection7267
false
null
0
jfb4wvu
false
/r/LocalLLaMA/comments/12egolu/cpurampc_for_language_models/jfb4wvu/
false
1
t1_jfb1poz
I want to build a PC for home; which CPU would be good?
1
0
2023-04-07T11:49:41
-2b2t-
false
null
0
jfb1poz
false
/r/LocalLLaMA/comments/12egolu/cpurampc_for_language_models/jfb1poz/
false
1
t1_jfb1hhi
Thank you! Are the CPUs needed also listed, or only GPUs?
1
0
2023-04-07T11:47:17
-2b2t-
false
null
0
jfb1hhi
false
/r/LocalLLaMA/comments/12egolu/cpurampc_for_language_models/jfb1hhi/
false
1
t1_jfb0994
If you want exact prices and help with choosing a good PC build, try r/buildapc and explain the requirements you're looking for. The system requirements are listed in the Models [wiki page](https://www.reddit.com/r/LocalLLaMA/wiki/models/). I'll remove this question now since it's answered.
1
0
2023-04-07T11:34:23
Civil_Collection7267
false
null
0
jfb0994
false
/r/LocalLLaMA/comments/12egolu/cpurampc_for_language_models/jfb0994/
true
1
t1_jfaymt0
Just any laptop with 16 GB of RAM will work for 13b models, if you are OK with a 4-bit quantized model and llama.cpp. No GPU needed.
1
0
2023-04-07T11:16:24
patrakov
false
null
0
jfaymt0
false
/r/LocalLLaMA/comments/12egolu/cpurampc_for_language_models/jfaymt0/
false
1
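The 16 GB claim above checks out on a back-of-the-envelope basis: a 4-bit quantized 13B model's weights alone take roughly 13e9 × 4 bits. (Real ggml files also store per-block scaling factors, so actual sizes run somewhat higher, and inference needs extra RAM for the KV cache.)

```python
params = 13e9          # 13B parameters
bits_per_weight = 4    # 4-bit quantization
weight_bytes = params * bits_per_weight / 8
print(f"{weight_bytes / 2**30:.1f} GiB of weights")  # ~6.1 GiB, well under 16 GB
```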
t1_jfayjnl
Nice! so, something like: *"Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: What are the origins of The Tribunal . ### Response:* ***<here goes ES lore>****"*
2
0
2023-04-07T11:15:23
shovelrage
false
null
0
jfayjnl
false
/r/LocalLLaMA/comments/12e6gq9/questions_coming_from_stable_diffusion_on_current/jfayjnl/
false
2
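The Alpaca instruction template quoted in the comment above, as a small helper (the example instruction is the one from the comment):

```python
# Standard Alpaca prompt format for instruction-only (no input) requests.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def alpaca_prompt(instruction):
    return ALPACA_TEMPLATE.format(instruction=instruction)

print(alpaca_prompt("What are the origins of The Tribunal?"))
```

The model's completion (the ES lore, in the comment's example) is whatever it generates after the final `### Response:` marker.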
t1_jfayfeb
Please turn it off. I imagine some people will let their patients suffer before giving an email.
2
0
2023-04-07T11:13:58
uhohritsheATGMAIL
false
null
0
jfayfeb
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jfayfeb/
false
2
t1_jfaxr1k
For 4 bit, code would probably have to be combined and used against https://github.com/johnsmith0031/alpaca_lora_4bit to have it work. Or I'd have to throw the questions against textgen using the API. The thing is set up for on demand text generation and chat, not really automated tasks. 4 bit is not a standard part of...
1
0
2023-04-07T11:05:53
a_beautiful_rhind
false
null
0
jfaxr1k
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jfaxr1k/
false
1
t1_jfaxazp
I can see jailbreaking an AI service, but I hate the idea of having to jailbreak my own AI. Vicuna simply cannot pay the rent it costs to live on my computer.
5
0
2023-04-07T11:00:34
a_beautiful_rhind
false
null
0
jfaxazp
false
/r/LocalLLaMA/comments/12dpgmg/model_tipps/jfaxazp/
false
5
t1_jfaw2dn
Ooba is awesome! Any tips?
1
0
2023-04-07T10:45:06
Amoesenbaer
false
null
0
jfaw2dn
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jfaw2dn/
false
1
t1_jfavcfn
Yeah I used the instruction-input-output method. At first I only used the database of books from all the games but now I have the info about artifacts, creatures, factions, flora, gods, npcs, and places converted to the alpaca format to use as well.
2
0
2023-04-07T10:35:49
Sixhaunt
false
null
0
jfavcfn
false
/r/LocalLLaMA/comments/12e6gq9/questions_coming_from_stable_diffusion_on_current/jfavcfn/
false
2
t1_jfav4w7
Here is my list of YouTubers who make videos about generative models: Aitrepeneur, Martin Thissen, Matthew Berman, 1littlecoder, Nerdyrodent, Sam Witteveen.
8
0
2023-04-07T10:33:05
teragron
false
null
0
jfav4w7
false
/r/LocalLLaMA/comments/12ecl7k/how_to_stay_uptodate_in_the_opensource_ai_field/jfav4w7/
false
8
t1_jfausif
Maybe you can share your YouTubers? I don't have any yet.
1
0
2023-04-07T10:28:27
fowwlcx
false
null
0
jfausif
false
/r/LocalLLaMA/comments/12ecl7k/how_to_stay_uptodate_in_the_opensource_ai_field/jfausif/
false
1
t1_jfatq5r
I need to check that out thanks.
1
0
2023-04-07T10:13:50
ThePseudoMcCoy
false
null
0
jfatq5r
false
/r/LocalLLaMA/comments/12dyz1p/questions_regarding_system_requirements/jfatq5r/
false
1
t1_jfatnxr
I edited out the -mavx2 and it runs with avx now.
1
0
2023-04-07T10:12:59
ambient_temp_xeno
false
null
0
jfatnxr
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jfatnxr/
false
1
t1_jfat0m4
Now I'm getting somewhere. I compiled it as-is from the repo, and it thinks I have AVX2 when I don't. System Info: AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 | and it crashes with "Illegal instruction" same as windows.
2
0
2023-04-07T10:04:03
ambient_temp_xeno
false
null
0
jfat0m4
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jfat0m4/
false
2
t1_jfarw24
Try this in your batch file: start powershell -c ".\chat -m ggml-alpaca-30b-04.bin -t 8 --temp 0.72 --repeat_penalty 1.1 --top_k 160 --top_p 0.73; pause" start executes a command without blocking, so cmd gets to the end of the file and the cmd window disappears. The command launches PowerShell with the c...
2
0
2023-04-07T09:47:58
MoneyPowerNexis
false
2023-04-07T23:40:07
0
jfarw24
false
/r/LocalLLaMA/comments/12dsdve/alpaca_30b_made_statements_about_cognition/jfarw24/
false
2
t1_jfaqzh8
Okay, tyvm! I think I'm going for gpt4-x-alpaca 13b, because the other 13b / 30b models have requirements that are too high. Should I use llama.cpp, or can you recommend any web UI with persona features, saving chat history, etc., based on llama.cpp?
1
0
2023-04-07T09:34:54
-2b2t-
false
null
0
jfaqzh8
false
/r/LocalLLaMA/comments/12dpgmg/model_tipps/jfaqzh8/
false
1
t1_jfaqjn2
It's just so insane that I can literally speak with a handheld console.
3
0
2023-04-07T09:28:23
-2b2t-
false
null
0
jfaqjn2
false
/r/LocalLLaMA/comments/12dpgmg/model_tipps/jfaqjn2/
false
3
t1_jfaqdca
Training Alpaca on Elder Scrolls sounds cool. Did you also format the new ES information as instructions (with Instruction-Input-Output), or did you use some other way just to teach it lore info?
3
0
2023-04-07T09:25:45
shovelrage
false
null
0
jfaqdca
false
/r/LocalLLaMA/comments/12e6gq9/questions_coming_from_stable_diffusion_on_current/jfaqdca/
false
3
t1_jfaq1fe
Ok so I got it to run okay in Linux. Commenting out that line made it not even use AVX, which is interesting (and slow)!
1
0
2023-04-07T09:20:56
ambient_temp_xeno
false
null
0
jfaq1fe
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jfaq1fe/
false
1
t1_jfangkm
If you guys have some educational YouTubers, I'd be happy to hear about them. I follow a few and they have some interesting videos on the topic.
3
0
2023-04-07T08:43:09
matija2209
false
null
0
jfangkm
false
/r/LocalLLaMA/comments/12ecl7k/how_to_stay_uptodate_in_the_opensource_ai_field/jfangkm/
false
3
t1_jfane2b
I just tried this but it opened up in cmd instead of PowerShell. Sorry for any annoyance, but any ideas as to what might have caused this? Thanks
0
0
2023-04-07T08:42:08
Wroisu
false
2023-04-07T08:48:19
0
jfane2b
false
/r/LocalLLaMA/comments/12dsdve/alpaca_30b_made_statements_about_cognition/jfane2b/
false
0
t1_jfan38r
AWESOME! This is exactly what I was looking for, especially for the bonus question. Maybe an extra link on the right of the page to make it even more obvious it is there? Thanks for your reply and gathering all that info on those pages!
3
0
2023-04-07T08:37:48
fowwlcx
false
null
0
jfan38r
false
/r/LocalLLaMA/comments/12ecl7k/how_to_stay_uptodate_in_the_opensource_ai_field/jfan38r/
false
3
t1_jfaldjr
I'm not sure I follow, so I'll interpret, instead. Do you mean, can LLaMA chat like this? The answer is definitely not, but the reason is less LLaMA and more us. We would need to be writing a substantial part of the dialogue ourselves, and most people don't chat in prose. If so, the thing we both really want, I bel...
3
0
2023-04-07T08:13:06
PiquantAnt
false
null
0
jfaldjr
false
/r/LocalLLaMA/comments/12dxl2j/adultthemed_stories_with_llama/jfaldjr/
false
3
t1_jfal327
Can this be used for dialogue?
2
0
2023-04-07T08:08:54
BlueeWaater
false
null
0
jfal327
false
/r/LocalLLaMA/comments/12dxl2j/adultthemed_stories_with_llama/jfal327/
false
2
t1_jfaklt9
I had a windows disaster since last having WSL set up, but I will try and get it set up today. Compiling anything other than python on windows is way beyond my current ability!
1
0
2023-04-07T08:02:14
ambient_temp_xeno
false
null
0
jfaklt9
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jfaklt9/
false
1
t1_jfajp0w
[deleted]
1
0
2023-04-07T07:49:29
[deleted]
true
null
0
jfajp0w
false
/r/LocalLLaMA/comments/12dxl2j/adultthemed_stories_with_llama/jfajp0w/
false
1