bmk#1476: ~~BERT~~ ~~BART~~ ~~BORT~~ BURT BIRT bmk#1476: the trend is clear Louis#0144: FART Louis#0144: feedforward adversarial routing transformer Louis#0144: ez bmk#1476: BLART, BLORT, and BLURT are also viable contenders Louis#0144: i need an LM named fart Louis#0144: pls Louis#0144: 🥺 bmk#1476: sei die änderung, d...
Ravna#1831: Distributed data generation is almost perfectly parallel with little communication overhead. Ravna#1831: Distributed training on a single NN is not. StellaAthena#3530: ^^ StellaAthena#3530: Even setting aside the fact that there are significant differences in what we are referring to when we say “the algori...
bmk#1476: (also our POC is the PM for TPUs and also the founder of TFRC i'm pretty sure) StellaAthena#3530: Oh shit is he StellaAthena#3530: I didn’t know that lol bmk#1476: (though he has expressed that there are limitations to what TFRC can and cannot hand out; we just don't know 100% what the limits are) bmk#1476: >...
bmk#1476: not promising bmk#1476: > GPT3 can take the seq_length up to 1024(max supported) max length is actually 2048 StellaAthena#3530: > How much contact have you had with Zak? Also you mentioned having access to a full TPU pod, was that a one-off or you had it for a considerable amount of time? @inoryy One of the p...
inoryy#0395: have you looked into it beyond discussions, even if on non-EAI projects? bmk#1476: i personally have not, i think some other members here have inoryy#0395: also by 'discussions' do you mean considering switching the project(s) to it or just talking about it in general? bmk#1476: we are not considering swit...
bmk#1476: > Is anyone here a native speaker of a language other than English and would be interested in helping some time a few months down the road with a dataset project? We'll be asking for your feedback on dataset text quality in your native language. Please DM me if you're interested. Louis#0144: cnn's kat kinsman...
cfoster0#4356: @gwern I think they do claim that about BORT Louis#0144: LOL Louis#0144: We have like 6 people who went to the subreddit Louis#0144: Almost instantly Louis#0144: LMAOOOO Louis#0144: I made it and it already said 10 people were viewing it Louis#0144: 6 rn bmk#1476: this is hilarious Louis#0144: https://c...
bmk#1476: the Oortcheckpoint bmk#1476: Oortbot Louis#0144: I urge everyone here to ask their LM about Oorts Louis#0144: it must be a conspiracy! cfoster0#4356: @gwern compare the GPU hours. It's 300 vs 1100 or 26000 gwern#1782: it's comparing the regular pretraining with the distillation, not roberta cfoster0#4356: The...
asparagui#6391: http://www.infinityplus.co.uk/stories/colderwar.htm asparagui#6391: a precursor to the laundry files AI_WAIFU#2844: I can't remember if we've had this discussion before, what are everyone's odds that GPT-3 is strong superhuman at what it does, putting probabilities on string completions? bmk#1476: can't...
cfoster0#4356: ^ gwern#1782: _shrugs. there is no evidence whatsoever that GPT-3 has human-level prediction, and all the evidence is otherwise; if that doesn't convince AI_WAIFU, then there's really nothing more to say_ cfoster0#4356: Lol cfoster0#4356: I don't think it's human level at general language prediction. But...
> 3. more subjects tried to guess the target word > based on the target sentence only, until the word was guessed or the number of unsuccessful guesses reached 10; if no subject was able to guess the target word, the passage was added to the LAMBADA dataset. This is not general prediction of english. This is picking ou...
Deleted User#0000: by bayesian probability you mean the likelihood? Deleted User#0000: P_theta(data)? AI_WAIFU#2844: Yeah, P(data | model) Deleted User#0000: yea AI_WAIFU#2844: Not the the trained model mind you, what I'm referring to is P(data | source code) Deleted User#0000: Ah Deleted User#0000: how would they calc...
@AI_WAIFU well u can have multiple passes over m examples before seeing the m+1th, and still do u what u said, and it would be more accurate Deleted User#0000: tho still what u say is only an approximation coz u are not marginalizing over initializations AI_WAIFU#2844: That works too. And will give you better model evi...
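The prequential scheme being described — score each new example under the model fit to everything seen before it, then update, and sum the log-probabilities to approximate P(data | source code) — can be sketched with a toy online learner. Everything here (the Laplace-smoothed byte predictor, the function name) is illustrative, not anything from an actual EleutherAI codebase:

```python
import math
from collections import Counter

def prequential_log_loss(stream, alpha=1.0, vocab=256):
    """Predict each symbol under the model trained on everything before it,
    then update. The running sum approximates -log2 P(data | program)."""
    counts = Counter()
    seen = 0
    total_bits = 0.0
    for sym in stream:
        # predict first: probability under the model as trained so far
        p = (counts[sym] + alpha) / (seen + alpha * vocab)
        total_bits += -math.log2(p)
        # then update on the example we just scored
        counts[sym] += 1
        seen += 1
    return total_bits

bits = prequential_log_loss(b"the quick brown fox jumps over the lazy dog")
```

A repetitive stream costs fewer total bits than a stream of all-distinct bytes, which is exactly the "better models accumulate less surprise" property the discussion relies on; swapping the toy predictor for an actual LM training loop gives the multi-pass variant mentioned above.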
Deleted User#0000: how is this helping against the "It's just memorizing the data" guys ? Deleted User#0000: well i guess, what is it showing beyond what u see from test data AI_WAIFU#2844: Because if you have a 10kb program that ingests 300GB of text and shrinks it down better than the best compression algorithms, yo...
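The compression framing can be made concrete with back-of-the-envelope arithmetic. The loss and tokenizer figures below are assumed purely for illustration (not measured values from any model discussed here):

```python
import math

# Assumed numbers, for illustration only:
loss_nats_per_token = 2.6   # hypothetical held-out cross-entropy, nats/token
bytes_per_token = 4.0       # hypothetical average bytes per BPE token

# Convert cross-entropy to an effective compression rate on raw text
bits_per_byte = loss_nats_per_token / math.log(2) / bytes_per_token
compression_ratio = 8.0 / bits_per_byte  # raw text is 8 bits per byte
```

Under these assumptions the model codes text at under 1 bit/byte, i.e. better than 8x compression; general-purpose compressors typically manage only around 2 bits/byte on English text, which is the gap the "it's just memorizing" objection has to explain away.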
bmk#1476: i mean as long as the loss is on held out data and we know we can get more data it should be fine, no? Deleted User#0000: (thats for the 3rd result) AI_WAIFU#2844: I meant cross validations. Deleted User#0000: ah over all choices of split, and over all split sizes AI_WAIFU#2844: Yup. AI_WAIFU#2844: When peopl...
AI_WAIFU#2844: Let me illustrate with an example. AI_WAIFU#2844: Suppose you have a process that draws images from a finite list of images. AI_WAIFU#2844: You can make a model that guesses blindly at first and then puts delta function densities on the images it's seen. AI_WAIFU#2844: Or you can make a model that does t...
AI_WAIFU#2844: Since you're directly evaluating the model evidence of your learning program that you wrote. AI_WAIFU#2844: You can get more accurate by keeping track of performance on all newly seen data. Deleted User#0000: > You can get more accurate by keeping track of performance on all newly seen data. @AI_WAIFU wh...
AI_WAIFU#2844: If the parameters of the network overfit to the data you're currently looking at because of correlations in the train data stream. That seems like a feature, not a bug. Deleted User#0000: it means that your performance metric is over-optimistic AI_WAIFU#2844: No? Deleted User#0000: assuming in actual app...
Deleted User#0000: (i still get the bayesian story of it being nice and stuff, but u cant really prove itunless u assume ur priors are good and i donno stuff) Deleted User#0000: (which tbf is probably fair to assume in many cases) AI_WAIFU#2844: Nope, but in non-iid environments, I don't know what to go off of other th...
Deleted User#0000: coz if u start at initialization theta, and generate the outputs from theta, it wont learn anything from the start? Deleted User#0000: coz its already optimal? AI_WAIFU#2844: There's gonna be random drift though. Deleted User#0000: not sure.. Deleted User#0000: loss is a global minimum Deleted User#0...
Deleted User#0000: so im imagining you are running a transformer or something autoregressively, but every time it samples an output, it uses that output to train itself? AI_WAIFU#2844: yes Deleted User#0000: why wouldnt it just learn: ok so i make output i which i sampled more likely. in the next step its more likely t...
Deleted User#0000: very hmm AI_WAIFU#2844: Thus you get parameter drift when doing SGD bmk#1476: Wait so what does gradient being zero in expectation imply? bmk#1476: Oh wait it's only zero when it's at the correct place Deleted User#0000: which it always is AI_WAIFU#2844: But in this case *everywhere* is the correct p...
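The zero-in-expectation-but-still-drifting behavior is easy to demonstrate with a toy model: sample from a softmax, then take an SGD step on the sample's log-loss. The gradient is p − onehot(i), so its expectation is p − p = 0 everywhere, yet each individual step is nonzero and the logits random-walk. This is a minimal sketch of the idea, not code from any experiment discussed here:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
z = np.zeros(10)   # logits of a toy "language model"
z0 = z.copy()
lr = 0.1

for _ in range(5000):
    p = softmax(z)
    i = rng.choice(len(z), p=p)   # sample from the model itself
    grad = p.copy()
    grad[i] -= 1.0                # d(-log p_i)/dz = p - onehot(i)
    z -= lr * grad                # E[grad] = p - p = 0, yet each step is nonzero

drift = np.linalg.norm(z - z0)    # ...so the parameters have wandered
```

The drift is also self-reinforcing: sampling i makes i more likely, so the logits tend to wander toward a near-deterministic distribution where the noise itself dies down — a parameter attractor.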
AI_WAIFU#2844: and converge to a parameter attractor. AI_WAIFU#2844: The thing me and @Deleted User were talking about earlier. bmk#1476: Sorry I haven't been following the discussion bmk#1476: And I'll probably ask tomorrow AI_WAIFU#2844: Where you view the LM training program as a model and evaluate its probability u...
*Address correspondence to catgirlresearch@eleuther.ai AI_WAIFU#2844: Honestly, you don't need a tonne of compute to demonstrate the point. AI_WAIFU#2844: I'll put it on the backlog. bmk#1476: That would jive with our whole thing very well bmk#1476: The whole "casual research" thing bmk#1476: I cannot wait for a paper...
AI_WAIFU#2844: ok fr I gotta go to bed. Deleted User#0000: i mean i can do math but engineering is a different skillz Deleted User#0000: but yeah i should go to bed too Deleted User#0000: good night bmk#1476: I barely know how derivatives work lol, i gave up like a quarter of the way into diffgeo bmk#1476: Anyways yeah...
bmk#1476: If you ever figure it out lmk lol Deleted User#0000: will do Deleted User#0000: one day.... bmk#1476: Ok it is actually seriously really sleep time this time Deleted User#0000: 5am Deleted User#0000: kek Deleted User#0000: gn chirp#4545: https://www.hpcwire.com/2020/11/02/aws-ultraclusters-with-new-p4-a100-in...
Of course it would be challenging to create or gather an ensemble of capable and performant recognition / segmentation modules. But AI systems in these areas keep progressing constantly, such that it will become increasingly easy to create ensembles that capture more and better abstract features. spirit-from-...
Deleted User#0000: yeah, then i don't know about gpt2-simple Bedebao#4842: It seems EleutherAI is getting more mentions on 4chan. bmk#1476: link pls Bedebao#4842: today and yesterday saw a surge https://arch.b4k.co/_/search/boards/v.vg.vm.vmg.vrpg.vst/text/eleutherai/ cfoster0#4356: AI Dungeon just announced a bunch of...
cfoster0#4356: I'm still shocked how high quality discussion here is, generally bmk#1476: I know, right? bmk#1476: I don't think we've ever had to even ban anyone bmk#1476: I warned that one guy once but that ended in an interesting conversation Bedebao#4842: This server is nearing 1k users, right? cfoster0#4356: Was o...
@bmk why is this worth avoiding? bmk#1476: ¯\_(ツ)_/¯ StellaAthena#3530: Lol WAUthethird#4977: yeah, as part of Latitude I hope beyond hope you guys succeed with gpt-neo this OpenAI pricing is not great Bedebao#4842: Sounds like OpenAI is pretty much swindling you guys. bmk#1476: @WAUthethird i thought you guys got pref...
Bedebao#4842: >gapingai didn't make it at least you can redeem yourselves with CHUNGUS bmk#1476: and HUMONGOUS bmk#1476: and the Pile data architecture™ StellaAthena#3530: It’s worth noting that Eleutheria, in addition to being the word “liberty” was also used as a proper noun to refer to a deification or personificati...
XMaster96#7538: > Could you tell us a bit more about this @XMaster96 ? Is this just about Eleuther or are other people invited too? I'm a bit confused @Daj To be fair I am a bit confused myself about who is a member of Eleuther and who has contributed to `GPT-Neo`/`The Pile`. I believe Lucidrains is not a member but has co...
sounds like my kind of zoom call WAUthethird#4977: By the way, for those interested in how we're planning on balancing OpenAI's costs while providing a fair deal to the AI Dungeon community, we just made this post: https://aidungeon.io/2020/11/07/ai-energy-update/ gwern#1782: the unlimited all-you-can-eat struck me as ...
bmk#1476: Not having any PR whatsoever has its.. disadvantages bmk#1476: > intentionally less filtered ... Likely lower quality *W h a t* bmk#1476: Man, and 4chan hasn't even realized that it's the *codebase* that's called GPTNeo bmk#1476: What if they find out that we're considering BigCHUNGUS bmk#1476: And MassiveCH...
@gwern i think they're talking about some other group that is talking about us gwern#1782: well, someone is wrong about both bmk#1476: Also they're kind of wrong about our data too bmk#1476: Unless 20% is "in large part" gwern#1782: sure, but being wrong about gpt-3 is lulz. I mean, it's in the paper. it's not like the...
bmk#1476: Also, word got out about the gpt2 model we plan on putting out, but word didn't get out that it's not trained on the Pile gwern#1782: it's good to have an enthusiastic userbase with a clear usecase bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/774790740857061376/Screenshot_2020-11-07-17...
bmk#1476: ちゃんちゃんちゃんちゃん cfoster0#4356: five guys, four chans, three turtle doves and a part ridge in a pair trie bmk#1476: ~~gwern is four people called Chan in a trenchcoat confirmed~~ cfoster0#4356: Chan, Chan, Chan, Chan (2020). guac#4716: https://tenor.com/view/jackie-chan-meme-gif-5480485 StellaAthena#3530: > 🏳️‍⚧...
Louis#0144: thats what I was worried about Louis#0144: I was hoping maybe there was some coreference magic Louis#0144: but I fear youre right StellaAthena#3530: Obviously the general problem of recognizing when the subject is from a certain reference class is AGI-hard Louis#0144: what if I go in the other direction tho.....
StellaAthena#3530: You could say that no DO can be a subject Louis#0144: up to 4 StellaAthena#3530: ah StellaAthena#3530: Telling the difference between "Stella found the elephants," and "Stella found the peanuts." seems like a nonstarter tbh Louis#0144: Elephants are people Louis#0144: according to COMET StellaAthena#...
Louis#0144: how so Louis#0144: also it would fuck the narrative, I dont really mind tho Louis#0144: this is just a proof of concept Louis#0144: > "Stella wanted to see the elephants. So Stella drove to the zoo. At the zoo, Stella saw the elephants being cared for. Stella wanted to feed the elephants. Stella found peanu...
Louis#0144: place holder characters? Louis#0144: PersonX Louis#0144: PersonY Louis#0144: etf Louis#0144: etc ** gwern#1782: (the only reason that neither is involved in the Resurrection of Jesus Christ is because the Catholic Church spent 2000 years burning every 'apocryphal' version, fearing their religious power) Ste...
cfoster0#4356: Check the subject, then go back with it filled in cfoster0#4356: err. s/subject/person/g Louis#0144: Batman swung the bat at the man. The Batman didn't just kill the man, he killed the man's family- it was a woman and her children. The Batamans are a very large, very powerful, extremely wealthy and ver...
He won't let you fuck him. He just doesn't give a fuck. The Batman is just a very good guy, but he's also very mean. He makes sure that he's very nice. And he's got a very big dick. And sometimes he's really angry. And if you're not, well... Well, then... Louis#0144: It's been explained before. I'm a cop. I can take yo...
Louis#0144: WELL FUCK YOU TOO gwern#1782: it *is* a stupid question. how stupid do you have to be to wonder why the goddamn batman needs some delicious refreshments gwern#1782: like Jesus, Batman is half divine and half human. it's what lets us identify with him as our savior Louis#0144: https://twitter.com/lcastricato/...
bmk#1476: is this the pile rt? Louis#0144: yes bmk#1476: oh no Louis#0144: those are different beams Louis#0144: w/ nucleus sampling cognomen#6297: might need a higher temperature cognomen#6297: just a hunch Louis#0144: @bmk gets worse Louis#0144: only happens when I use female names Louis#0144: lol AI_WAIFU#2844: what...
Louis#0144: thats good Louis#0144: i was getting worried bmk#1476: i mean... does this happen if you a) start a new Pile run from scratch, or b) start a C4 run Louis#0144: nah pretraining is done Louis#0144: this is the pile run bmk#1476: like. maybe this is just bad luck bmk#1476: maybe C4 would also have the same pro...
bmk#1476: like, Pile only bmk#1476: 2. what if you tune but ctrl+f the word `bitch` to something like `rubricalist` bmk#1476: and see if it says that bmk#1476: if it does then it's almost certainly a problem with eli5 AI_WAIFU#2844: https://old.reddit.com/r/explainlikeimfive/search?q=she%27s+a+bitch&restrict_sr=on&sort...
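bmk's suggested control — swapping the suspect word for a nonsense token before finetuning, then checking whether the model still produces it — is a simple corpus edit. A sketch of the swap step; `rubricalist` is just the placeholder from the chat, and the helper name is made up:

```python
import re

def swap_token(text, target, placeholder="rubricalist"):
    """Replace every standalone occurrence of `target` (case-insensitive)
    so a finetune on the edited corpus can't learn the original word."""
    pattern = rf"\b{re.escape(target)}\b"
    return re.sub(pattern, placeholder, text, flags=re.IGNORECASE)

edited = swap_token("She's a bitch. What a BITCH move.", "bitch")
# -> "She's a rubricalist. What a rubricalist move."
```

If the finetuned model then emits the placeholder in the same contexts, the association is coming from the finetuning data (e.g. eli5) rather than from pretraining.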
Louis#0144: It yells at me with women and Batman Louis#0144: It hated when Batman drank milk Louis#0144: Was ok with Spider-Man and Superman bmk#1476: what does it say with batman Louis#0144: Same thing AI_WAIFU#2844: what about joker? bmk#1476: huh Louis#0144: “Because he's a bitch. He's the kind of bitch to drink a g...
Louis#0144: Mostly sexist slurs Louis#0144: Whore Louis#0144: Slut Louis#0144: Etc bmk#1476: ok Louis#0144: Might honestly be Reddit bmk#1476: replace all occurrences of those words with something random Louis#0144: I would not be surprised AI_WAIFU#2844: what if you change milk for something else. Louis#0144: Oh true L...
AI_WAIFU#2844: pumpkin spice lattes Louis#0144: Pumpkin spice lattes Louis#0144: LOL Louis#0144: Ok Louis#0144: That gives idea Louis#0144: Ideas Louis#0144: Ty bmk#1476: sidenote: pumpkin spice tastes bad cmv Louis#0144: I’ll try bmks idea too Louis#0144: PSLs are trash Louis#0144: They taste nothing like real pumpkin...
Louis#0144: Wtf guac#4716: hahaha bmk#1476: X doubt AI_WAIFU#2844: https://www.adweek.com/brand-marketing/the-first-starbucks-pumpkin-spice-latte-had-no-pumpkin-in-it/ bmk#1476: like, orange soda tastes nothing like oranges but it's amazing guac#4716: don't look up vanilla beaver bmk#1476: literally infohazard guac#471...
Louis#0144: Don’t like anything carbonated Louis#0144: I get so sick to my stomach guac#4716: lmao interesting interestiiiing gwern#1782: 'banana' is an interesting case. I keep hoping to spot gros michel bananas in person at some point to see if 'banana'-flavored candy really does taste like gros michel, just not regu...
AI_WAIFU#2844: I can barely tolerate water unless its basically at 0c bmk#1476: i have no idea what fahrenheit is, sorry bmk#1476: how many football fields per fortnight is that gwern#1782: (apparently the whole drinking-hot-water-is-good-for-you thing was another dumb Maoist communist trick, like 'traditional chinese ...
bmk#1476: been here before? AI_WAIFU#2844: you could say that gwern#1782: as an american, I can safely say that edmonton is a place name I have seen before. I don't know anything about it, but I *have* seen it before. be honored. bmk#1476: haha bmk#1476: we stand out on population density maps as "that one outlier dot ...
bmk#1476: we set a record this year, apparently Louis#0144: That’s it bmk#1476: hey, WEM is *nice* Louis#0144: LMAO AI_WAIFU#2844: I like how at those temps it doesn't matter if its Fahrenheit or Celsius. bmk#1476: that's why i round it to -40 bmk#1476: so i can drop the qualifying C bmk#1476: anyways, it snowed quite a...
Louis#0144: Camel mating calls bmk#1476: thanks i hate it Louis#0144: @bmk @AI_WAIFU @gwern ok so it works for all foods and drinks Louis#0144: It has issues with people w female names eating or drinking Louis#0144: Except when I say beer for instance Louis#0144: It’s ok w “manly” food Louis#0144: Also it just really h...
cfoster0#4356: Huh. This just seems so weird. Never seen it with any other model Louis#0144: It’s a specific prompt AI_WAIFU#2844: If you just sample directly from the distribution, no funny business, what happens? Louis#0144: I sent it above Louis#0144: Doesn’t happen w other prompts Louis#0144: @AI_WAIFU idk Louis#01...
zphang#7252: cool, thanks! Deleted User#0000: 😦 https://cdn.discordapp.com/attachments/729741769738158194/775188006503710780/unknown.png zphang#7252: under NDA (Nyan-Disclosure Agreement) bmk#1476: Neko Disclosure Agreement gwern#1782: Non-neko Disclosure Agreement: "you agree to not disclose information about all our...
Deleted User#0000: > rapid progress curves given good evaluation/benchmarking/feedback: NN programmers could get superhuman quickly if we can give them good feedback on correctness what is meant by "rapid progress curves given good evaluation/benchmarking/feedback" ? In the context of LMs, or is this about other ty...
FractalCycle#0001: 😳 FractalCycle#0001: *Nani?* bmk#1476: **verrückt** rapanui#0579: Hello, I was pointed to this Discord by u/SSCMeetup on Reddit- I missed the Sam Altman meetup, and by request he asked it not to be recorded. But apparently someone here has some notes/insights from the meeting? StellaAthena#3530: How...
he actually took back his answer a bit, later on - he said that once we have a good way to evaluate, we could surpass the best human programmers quickly bmk#1476: pls add all this info to the doc @Veedrac @chirp so we can keep it all in one place chirp#4545: > oa is currently focussing on faster inference I assume t...
Veedrac#0443: We're talking about a world where GPT-style models write code *as well* or *better* than humans, right? This naturally implies several other capabilities: bmk#1476: What if human never write the code again Veedrac#0443: 1) You can point at code and ask the model to tell you what it does, how it works, and...
StellaAthena#3530: I haven't seen any examples of that, and when I went looking a month or so ago I couldn't find any bmk#1476: i'd say that understanding code *can* be harder bmk#1476: especially *ahem* legacy code bmk#1476: spewing out more code is easy StellaAthena#3530: I would be very interested in examples and th...
bmk#1476: before, you'd keep a bunch of state around on your servers and maintain it by hand StellaAthena#3530: Though maybe there is really just a smooth gradient and I'm drawing meaningless boundaries StellaAthena#3530: I don't know. bmk#1476: now you just spin up a blank slate and rebuild your entire system from scra...
bmk#1476: what python version bmk#1476: you need to use python 3.6 cfc#2691: oh thanks cfc#2691: i was on 3.8 Dal#7192: General-ish question. It's been mentioned that GPT3 derived arithmetic on its own from its dataset. Was it intentionally programmed to model its own content or did the rule arise purely from regression...
Louis#0144: We literally have no appropriate metrics Louis#0144: We will get there eventually Dal#7192: That's to say, then: A: Your understanding is that GPT3 associated a rule from within its existing dataset to derive arithmetic B: Question Answering is considered insufficient for gleaning insight from the mature mo...
Louis#0144: And B is also kinda wrong because what you’re more interested in is how the model stores internalized representations (which QA won’t really tell you) Louis#0144: Nah Louis#0144: Most ML outside of DL is relatively straightforward Louis#0144: SVMs are straightforward for instance Louis#0144: Most Bayesian...
Dal#7192: 🤔 Louis#0144: There’s so many ways to define a concept Louis#0144: No one agrees Dal#7192: I could take a stab but it'd just be even more ambiguous 😅 Louis#0144: Do you mean that it’s high information content? Dal#7192: More or less Dal#7192: My own model of information is basically a collection of associat...
Louis#0144: How do we compare the representations that LMs like GPT3 uses to representations that other ML methods make Louis#0144: Or that symbolic models make Louis#0144: And I said there’s no real way to do so Louis#0144: Since we don’t have metrics for analyzing the representations DL makes effectively Louis#0144: ...
Dal#7192: Thanks. More to study. Dal#7192: I'm slowly filling out a vocabulary of actual terms in the field, this is getting good 😄 Dal#7192: (tldr I view NN/DL as something akin to building instincts rather than building full minds but that theory is still cooking) StellaAthena#3530: If you are good at topology I hav...
dudekingbromanguyokay#2595: Ah, probably better to say "I can't," in this context 🙂 I can edit some code that I run, but it's not like I can create stuff from scratch. dudekingbromanguyokay#2595: I do use Jupyter, Google Colab + Python, have read (some) transformer papers, etc, and know the basics of javascript, ruby,...
StellaAthena#3530: But still StellaAthena#3530: “Build paths where people walk” bmk#1476: i'll make it so that if you install it through pypi it'll expose a single function for pulling The Pile bmk#1476: and write all our documentation around that bmk#1476: and then the replication stuff will be under Here Be Dragons S...
bmk#1476: @cfoster0 i think you might want to remove the "this announcement does not exist", i think it's kind of confusing StellaAthena#3530: @bmk ah right. bmk#1476: nit: typo: "repsectively" bmk#1476: nit: link "aligning artificial intelligences" actually resumes from somewhere in the middle of the video and not the...
PhilPapers
ExPorter
# General internet
OpenWebText2
StackExchange
Wikipedia
CommonCrawl
# Prose
Bibliotik
Gutenberg
BookCorpus
# Dialogue
UbuntuIRC
HackerNews
EuroParl
YTSubtitles
Opensubtitles
# Misc
DMMath
EnronEmails
Github
``` cfoster0#4356: Thanks, fixed. @bmk StellaAthena#3530: @bmk I’m in bed but I can do it in the morning bmk#1476: that would be great cc_#1010: out of curiosity what's the progress on the pile and gpt-neo cc_#1010: just very broad "where we at" check bmk#1476: pile is almost done cc_...
StellaAthena#3530: GPT-Neo probably mostly works bmk#1476: analysis will be most of the next month and a half StellaAthena#3530: We haven’t trained it on GPT-3 scales yet because $$$$ bmk#1476: well, that's technically true but not entirely accurate imo StellaAthena#3530: Also because we need to finish the data first b...
cc_#1010: are those a thing only google has access to? StellaAthena#3530: We’ve also done experiments with larger scales but not full model runs bmk#1476: too expensive bmk#1476: way too expensive cc_#1010: how much too expensive bmk#1476: let me look it up, one moment cc_#1010: so realistically without some sort of in...
cc_#1010: which reply cc_#1010: mine or bmks StellaAthena#3530: > my parents would never let me leech that much money off of them for something that benefits other people lmao @cc_ cc_#1010: ah cc_#1010: they're just neoliberals cc_#1010: wealthy ones with lots of money cc_#1010: and properties bmk#1476: what order of ...
bmk#1476: there is no way we're getting >$1MM through any way, and so that possibility is not up for serious consideration cc_#1010: great StellaAthena#3530: @bmk you’re probably overestimating the cost. If we had a wealthy patron we could buy DGX-2s cc_#1010: so we'd have to acquire the relevant hardware through some ...
StellaAthena#3530: Anyways there’s a program where Google gives worthy poor people free TPUs. That’s what we are currently using, but with our current level of compute that would take years. StellaAthena#3530: (Actually, initializing the model would time out the pods so it’ll take forever) StellaAthena#3530: We are wor...
cc_#1010: if someone can wrangle up a list of relevant costs i can probably take some stuff off people's hands bmk#1476: our finances are a mess cc_#1010: it's money that i'm decidedly *not* going to use because i want for nothing cc_#1010: someone should probably handle that lmao cc_#1010: yall need a treasurer bmk#14...
bmk#1476: yeah, i've been way too busy writing up code bmk#1476: we barely have documentation cc_#1010: im willing to shoulder the atlasian responsibility (/j) if it's a thing people think we'd need cc_#1010: or would be helpful to have at least cc_#1010: i dont really do much with my day bmk#1476: for me priorities ar...
StellaAthena#3530: Oh we can absolutely put you to work bmk#1476: C#? cc_#1010: i use GMS bmk#1476: what's that? cc_#1010: which is more like javascript than anything cc_#1010: gamemaker studio 2 bmk#1476: ah bmk#1476: never heard of it cc_#1010: it's good stuff StellaAthena#3530: Is it a GUI? bmk#1476: python is like ...
bmk#1476: i have a minor personal vendetta against google sites so a custom website would be great cc_#1010: do you not have a custom website to begin with cc_#1010: why are you using google websites bmk#1476: nope bmk#1476: everyone is too busy writing code bmk#1476: no time to do website stuff StellaAthena#3530: Beca...
StellaAthena#3530: “Draw” can mean “design on a computer” bmk#1476: :smallbrain: CC :bigbrain: CC cc_#1010: i can't really make any images for you that you couldn't already find somewhere else StellaAthena#3530: Shame cc_#1010: i offer financial stuff since that's stuff i can do in my spare time that's not already occu...
cc_#1010: but i figure if i get hired for an entry level python position i can learn on the job StellaAthena#3530: Skills sections on resumes are all lies cc_#1010: i did learn it at one point in my life cc_#1010: when i was like 16 cc_#1010: and it is all gone now bmk#1476: python is literally spicy pseudocode cc_#101...
bmk#1476: and managing that many people across multiple projects spanning probably months to a year or so will be very complicated bmk#1476: actually, step 1 is getting that many people interested cc_#1010: managing them to do... what? bmk#1476: so we want to have native speakers to have input on our various multilingu...
cc_#1010: oh now thats a list cc_#1010: now that i could handle bmk#1476: the problem is that gathering literal dozens of people with that kind of strict requirements takes a lot of time StellaAthena#3530: 1 and 6 are intermittent tasks 2, 3, and 4 are one-off tasks 5 is a continuous task bmk#1476: so we want to get st...
bmk#1476: though more can't hurt cc_#1010: which is partially why im interested in this project since openAI rejected my gpt-3 proposal and i figured i'm never getting it cc_#1010: i mean i could talk to DeepLeffen cc_#1010: we've chatted in the past StellaAthena#3530: Is that a GPT-X trained on Leffen tweets? cc_#1010...
cc_#1010: ping me tomorrow because i have severe unmedicated adhd and i will forget otherwise cc_#1010: guaranteed StellaAthena#3530: Definitely cc_#1010: https://pbs.twimg.com/media/Ei75UYOXkAEEO-u?format=png&name=900x900 cc_#1010: a gift for you before bed StellaAthena#3530: I have severe medicated ADHD that’s nevert...
> Step 4: Don’t write boilerplate > Step 5: Write a big lump of code > Step 6: Break your code into pieces > Step 7: Keep writing code FractalCycle#0001: > Step 1: write code > Step 2: lol no I think this is what's known as "best practices" in the industry StellaAthena#3530: Can confirm Noa Nabeshima#0290: Ok, but to w...
@gwern _Paul Graham is typing_ shawwn#3694: congrats on surpassing TPU Podcast in member count gwern#1782: @Daj I think pg would say 'well ok python is good enough now' gwern#1782: even if lisp was a secret weapon eons ago back in 1995, in 2020, I do not think he would say today that choice of programming language is i...
shawwn#3694: That's your bar. You can either believe me, or believe the evidence (that even the great antirez failed), or the fact that lobsters *probably* wouldn't survive at HN-scale shawwn#3694: I can certainly go into reasons and specific details of *why* those things are true, but few people believe it's true in t...
MasterScrat#6910: Hello everyone. I'm working on a natural language generation project. I recently switched from gpt2-xl to megatron-11b and it led to significant improvement for my use case. So naturally, I'm looking for any available larger model (that I can have full control over). What is the status of gpt-neo? ar...
StellaAthena#3530: Okay I’ll let @bmk talk because he knows better than me bmk#1476: our model is likely more than a few pct less efficient MasterScrat#6910: i see! what is the largest model you've trained so far, and how does it compare to the closest published GPT model? MasterScrat#6910: and, where exactly do the di...
StellaAthena#3530: GPT-2 scale MasterScrat#6910: 1.5B model? how many TPU hours did it take and with which TPU version? MasterScrat#6910: Also I imagine prices grow even larger on GPUs? gwern#1782: (with A100s rolling out in datacenters, one hopes prices will finally drop) StellaAthena#3530: The people who know the ans...
Bedebao#4842: OpenAI is too scared to unleash Skynet. EleutherAI doesn't care. cfoster0#4356: lies! We care and don't have a clue what to do 🤔 StellaAthena#3530: > OpenAI is too scared to unleash Skynet. EleutherAI doesn't care. @Bedebao this is the exact opposite of the truth tho bmk#1476: the road to paperclippifica...
spirit-from-germany#1488: Do you think 40GB of VRAM is enough to finetune GPT-2 770M? Or even 1.5B? gwern#1782: of course. 1.5b was trained on 16GB V100s, iirc spirit-from-germany#1488: hmm... when I checked if I could finetune GPT2 on Colab Pro with P100s, it always got OOM errors chirp#4545: https://www.reddit.com/...
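Whether 40GB is enough can be ballparked: with vanilla Adam in fp32, each parameter costs roughly 16 bytes (4 for the weight, 4 for its gradient, 8 for Adam's two moment buffers), before activations. The per-parameter byte count below is that common rule of thumb, not a measured figure:

```python
def adam_fp32_gib(n_params, bytes_per_param=16):
    """Rough weights + grads + Adam-state memory in GiB; activations come on top."""
    return n_params * bytes_per_param / 2**30

gpt2_large = adam_fp32_gib(774e6)   # ~11.5 GiB
gpt2_xl = adam_fp32_gib(1.5e9)      # ~22.4 GiB
```

Both fit in 40GB with headroom for activations, consistent with gwern's "of course"; squeezing onto a 16GB card relies on tricks like gradient checkpointing and small batches, which is also why a naive Colab finetune can OOM on a P100.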
Ken#8338: Interesting article discussing Nick Bostrom's new working paper about future AGI and utilitarianism: https://www.bbc.com/future/article/20201111-philosophy-of-utility-monsters-and-artificial-intelligence gwern#1782: @Airatak not necessarily what you think of by text perhaps but https://www.gwern.net/GPT-2-mu...
Airatak#7842: btw you guys think you could host the GPT Neo model on huggingface? That would be real awesome and convenient Sid#2121: sure @Airatak that sounds like a great idea eventually Sid#2121: a question: Does anyone have any idea the kind of hardware OpenAI used to train GPT3? Specifically the type of interconne...