{
"paper_id": "2021.eacl-demos.29",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:45:52.482313Z"
},
"title": "Story Centaur: Large Language Model Few Shot Learning as a Creative Writing Tool",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Swanson",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Kory",
"middle": [
"W"
],
"last": "Mathewson",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ben",
"middle": [],
"last": "Pietrzak",
"suffix": "",
"affiliation": {},
"email": "bpietrzak@google.com"
},
{
"first": "Sherol",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {},
"email": "sherol@google.com"
},
{
"first": "Monica",
"middle": [],
"last": "Dinalescu",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "2021",
"venue": null,
"identifiers": {},
"abstract": "Few shot learning with large language models has the potential to give individuals without formal machine learning training the access to a wide range of text to text models. We consider how this applies to creative writers and present STORY CENTAUR, a user interface for prototyping few shot models and a set of recombinable web components that deploy them. STORY CENTAUR's goal is to expose creative writers to few shot learning with a simple but powerful interface that lets them compose their own co-creation tools that further their own unique artistic directions. We build out several examples of such tools, and in the process probe the boundaries and issues surrounding generation with large language models.",
"pdf_parse": {
"paper_id": "2021.eacl-demos.29",
"_pdf_hash": "",
"abstract": [
{
"text": "Few shot learning with large language models has the potential to give individuals without formal machine learning training the access to a wide range of text to text models. We consider how this applies to creative writers and present STORY CENTAUR, a user interface for prototyping few shot models and a set of recombinable web components that deploy them. STORY CENTAUR's goal is to expose creative writers to few shot learning with a simple but powerful interface that lets them compose their own co-creation tools that further their own unique artistic directions. We build out several examples of such tools, and in the process probe the boundaries and issues surrounding generation with large language models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "One of the most promising possibilities for large language models (LLMs) is few-shot learning (Brown et al., 2020) in which it is possible to create text to text models with little or no training data. Few shot learning with LLMs relies on the ease with which the desired Input/Output (I/O) behavior can be effectively translated into the text continuation problem at which these models excel. In recent years, LLMs have progressed to the level where this translation requires only familiarity with the natural (e.g. English) language on which the model was trained.",
"cite_spans": [
{
"start": 94,
"end": 114,
"text": "(Brown et al., 2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We present STORY CENTAUR, a Human-Computer Interface that closes the gap between non-technical users and the power and possibilities of few shot learning, with the intended audience of writers of creative text. It is our intention that by giving writers a tool for building generative text models unique to their process and vision, these artists will experience genuine feelings of co-creation. STORY CENTAUR consists of a prototyping UI as well as a set of Angular web components that interact via a central pub-sub synchronization mechanism 1 . Due to the non-trivial monetary cost of inference and the requirement of specialized hardware, the LLM that underlies our tool is not included in this release; instead we dependency inject the LLM with a simple (string) \u2192 string interface to be provided by an arbitrary service.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As the ethical implications of LLMs (Bender et al., 2021) are an important and unsolved problem in NLP, we highlight this design choice to decouple the LLM itself from STORY CENTAUR, a web based user interface that prepares the LLM's inputs and processes its outputs. To put it another way, the text generated is no more or less biased than the LLM and user themselves, as STORY CENTAUR's purpose is not to enhance or change the abilities of an LLM, but instead to democratize its use for non-technical users.",
"cite_spans": [
{
"start": 36,
"end": 57,
"text": "(Bender et al., 2021)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The observation that simple \"fill-in-the-blank\" neural network models trained on large quantities of text can be used for problems beyond their primary learning objective dates back to word2vec (Mikolov et al., 2013) in which word embeddings were able to perform some SAT style analogies. While this ability captured many researchers' fascination, these models' dual function as a representation learner from which other models could be initialized and/or fine-tuned took center stage in the following years.",
"cite_spans": [
{
"start": 194,
"end": 216,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Representation learning techniques made steady advances, expanding to sentence level contextually sensitive word embeddings with ELMo (Peters et al., 2018), the introduction of Transformers and MLM objectives with BERT (Devlin et al., 2018), and the menagerie of similar systems that followed. While most work concerned itself primarily with topping each other's GLUE and SUPERGLUE scores (Wang et al., 2019), work from OpenAI kept the torch lit for investigating the emergent abilities of representation learning (Radford et al., 2017, 2018, 2019; Brown et al., 2020). Their most recent work undeniably shows that sufficiently large LMs enable few shot models that approach and sometimes surpass state of the art performance on a wide range of NLP tasks. Figure 1: A Few Shot Formula in action. The Data and Serialization are used to create the Prompt, which along with the serialized inference time Input becomes the Preamble. The LM generates a continuation, from which the Serialization extracts the output. The Sentinels used (with newlines omitted) are \"I saw a\", \"! It goes \", and \". -NEXT-\".",
"cite_spans": [
{
"start": 128,
"end": 154,
"text": "ELMo (Peters et al., 2018)",
"ref_id": null
},
{
"start": 219,
"end": 240,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 709,
"end": 754,
"text": "GLUE and SUPERGLUE scores (Wang et al., 2019)",
"ref_id": null
},
{
"start": 861,
"end": 882,
"text": "(Radford et al., 2017",
"ref_id": "BIBREF18"
},
{
"start": 883,
"end": 906,
"text": "(Radford et al., , 2018",
"ref_id": "BIBREF19"
},
{
"start": 907,
"end": 930,
"text": "(Radford et al., , 2019",
"ref_id": "BIBREF20"
},
{
"start": 931,
"end": 950,
"text": "Brown et al., 2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 243,
"end": 251,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Human + AI co-creation has existed in both practice and theory for several years. To highlight some examples in practice that relate to creative writing, as opposed to music or visual art, of which there are many, we refer the reader to browse the Electronic Literature Collection 2 , a longstanding community of artist-technologists who have blazed this trail since the days of hypertext. A number of publications on AI co-creation exist as well, covering a diverse range of artistic applications (Martin et al., 2016; Mathewson and Mirowski, 2017; Oh et al., 2018; Mirowski and Mathewson, 2019; Kreminski et al., 2019; Sloan, 2019; Tsioustas et al., 2020).",
"cite_spans": [
{
"start": 488,
"end": 509,
"text": "(Martin et al., 2016;",
"ref_id": "BIBREF12"
},
{
"start": 510,
"end": 539,
"text": "Mathewson and Mirowski, 2017;",
"ref_id": "BIBREF13"
},
{
"start": 540,
"end": 556,
"text": "Oh et al., 2018;",
"ref_id": null
},
{
"start": 557,
"end": 586,
"text": "Mirowski and Mathewson, 2019;",
"ref_id": "BIBREF15"
},
{
"start": 587,
"end": 610,
"text": "Kreminski et al., 2019;",
"ref_id": "BIBREF10"
},
{
"start": 611,
"end": 623,
"text": "Sloan, 2019;",
"ref_id": "BIBREF22"
},
{
"start": 624,
"end": 647,
"text": "Tsioustas et al., 2020)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "For a lighter introduction, Case (2018) gives some examples of AI + Human collaborations, or Centaurs 3 , but primarily presents the argument that the HCI that connects the Human and Computer is of paramount importance, a sentiment that is in line with our own work. We also resonate with the opinion of Llano et al. (2020), which argues for explainability as a catalyst for fruitful co-creation.",
"cite_spans": [
{
"start": 28,
"end": 39,
"text": "Case (2018)",
"ref_id": "BIBREF4"
},
{
"start": 304,
"end": 323,
"text": "Llano et al. (2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The core contribution of this work is a UI for the creation of few shot text generation models ( Figure 2 ). We first define terms for the components of LM based few shot modeling as it is decomposed in our system: The few shot learning system as a whole is represented as a Formula which when used with a large LM provides arbitrary text to text I/O behavior.",
"cite_spans": [],
"ref_spans": [
{
"start": 97,
"end": 106,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Few Shot Formulas",
"sec_num": "3"
},
{
"text": "We note that while generally LMs refer to any probability distribution over a sequence of tokens, in this work we use the term to refer to the subset model class that factorizes the joint probability into conditional P(w_t | w_{1...t\u22121}) terms. Put simply, we are referring to the \"predict the next word given all words so far\" variety of LM, which includes all of the GPT models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Few Shot Formulas",
"sec_num": "3"
},
{
"text": "A Formula is composed of Data and Serialization. Each item in the Data consists of lists of string inputs and outputs that exemplify the desired I/O. The Serialization defines a reversible transformation between the Data and the raw text handled by the LM. STORY CENTAUR uses a Serialization template of fixed text Sentinels that interleave the inputs and outputs; a Sentinel is defined to precede each input and output, as well as one that separates inputs and outputs and one that comes after the final output (See Figure 2 for an example). Carefully chosen sentinels are powerful tools for nudging the language model in desired directions (see the Appendices for examples), but must also be designed so as not to be confused with model input or outputs.",
"cite_spans": [],
"ref_spans": [
{
"start": 517,
"end": 525,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Few Shot Formulas",
"sec_num": "3"
},
{
"text": "A Formula is used by first invoking the Serialization on the Data, creating the Preamble. Then, the new inputs are converted using the Serialization and concatenated to the Preamble, creating the Prompt. The LM is asked to continue the text in the Prompt, and the Serialization is used to extract the output(s) from the result. The LM cannot explicitly enforce the Serialization format and as such will often produce non-conformant results, which must be rejected. In practice, if the LM is sufficiently capable and the task well suited then a simple rejection sampler suffices to produce several acceptable options, as decoding is parallelizable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Few Shot Formulas",
"sec_num": "3"
},
{
"text": "STORY CENTAUR's user interface for Formula design is shown in Figure 2 , with supplemental screenshots in the appendices. While there is no fixed workflow, we have found the following process to be effective. We assume only that the user is proficient in English and has a strong concept of their desired I/O. First, the user must enter at least two examples of I/O pairs into the Data panel and take a pass at defining a Serialization, relying on the live updated Preamble panel to preview their progress. With a few examples in place, the Auto-Generate button can then be used to suggest new candidate I/O pairs by passing the Preamble to the LM and allowing the user to prune these suggestions. This process can be repeated, quickly converging to several (10 or more) solid examples and clear evidence that the Serialization is being captured by the LM. As a final evaluation technique, we provide a Test mode that takes inference inputs and applies the current Formula, also reporting the rate at which the LM output respects the Serialization.",
"cite_spans": [],
"ref_spans": [
{
"start": 62,
"end": 70,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Formula Development",
"sec_num": "3.1"
},
{
"text": "We showcase the potential of Formulas that one might create using STORY CENTAUR in several Experiments. These experiments all rely on one or more Formulas that were built using the development tool and workflow described above, and are each motivated by a different artistic scenario. When possible, we present the I/O specifications for each Experiment and invite the reader to view full Experiment screenshots as well as the underlying Formulas' Data and Serialization in the Appendices. Perhaps the most obvious application of generative language models to creative writing is overcoming writer's block. Specifically, we consider the scenario in which the writer has some existing seed text and wants to be presented with possible continuations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Writing Tool Experiments",
"sec_num": "4"
},
{
"text": "Generative LMs are ripe for this task as they can reliably continue short text; for the definition of LM used in this work (see Section 3) this is indeed exactly the task they were trained on. In this pure use of the LM the author is only able to provide the seed text, and so in this experiment we use a few shot Formula to provide an additional input of a word or phrase that is required to appear in the generated text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Magic Word",
"sec_num": "4.1"
},
{
"text": "The Magic Word formula takes two inputs: the seed text that must be continued by the model and the \"Magic Word\" that must be used in the continuation. In this Experiment, the generated outputs are not only discarded if they do not conform to the Serialization but also if the Magic Word does not appear as a substring. The UI allows editing of both the magic word and seed text, and on generation the user is given a maximum of three sentences that they can click to append to the editable main text box.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Magic Word",
"sec_num": "4.1"
},
{
"text": "From an academic perspective, it is worth noting that this I/O paradigm has been explored in several examples of previous work, often with the same motivation as a writer's aid (Ippolito et al., 2019) . Many literary characters have their own peculiar way of speaking: Yoda, Tolkien's dwarves, Treasure Island's pirates. In this Experiment we address the scenario where a writer has a clear idea of what they want the character to say and wants suggestions as to how their character might actually say it.",
"cite_spans": [
{
"start": 177,
"end": 200,
"text": "(Ippolito et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Magic Word",
"sec_num": "4.1"
},
{
"text": "We phrase this problem as a Formula with one input and one output in which the input is in neutral style and the output is a paraphrase with the desired style applied. This works nicely with few shot learning, as it is relatively easy to invent (or generate) a simple unstyled statement and then to imagine how a character might say it. We showcase several such Formulas in this experiment, selectable in a menu. For unstyled source text, there are three editable areas for text to rephrase that can be restyled individually or all at once.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Say It Again",
"sec_num": "4.2"
},
{
"text": "We provide one additional Formula that might be considered zero shot style transfer, although it is still performed using a few shot Formula. When the style \"CUSTOM\" is selected, an input box appears where the user can enter any raw text they wish. This text is then used in a Formula with two inputs, the text to be restyled and the name or description of the character whose style to use. The surprising result is that this is often possible with no examples of the requested style itself, only the proper Serialization and a few examples of the full I/O, shown in Figure 4 with other custom characters. As this information most likely comes from patterns and associations encoded into the LM's parameters during training, this method works best with fictional characters from major movies or celebrities.",
"cite_spans": [],
"ref_spans": [
{
"start": 565,
"end": 573,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Say It Again",
"sec_num": "4.2"
},
{
"text": "We encourage the further examination of large LMs for style transfer, as we were anecdotally impressed with the output of this experiment in particular. As some recently successful work in style transfer (Riley et al., 2020) already follows a label free approach that might itself be considered few shot in nature, interesting experimental comparisons are likely possible. Modern LMs excel at producing text that is coherent, grammatical, at times interesting, and frequently amusing. However, cracks begin to show in coherence as generations grow longer (Cho et al., 2018). A common mitigating technique has been to construct hierarchical generation systems in which a high level representation focused on common sense story structure is then transformed into narrative text (Fan et al., 2018; Ammanabrolu et al.). This trend inspires this experiment, whose goal is the co-creation of a short story that is both coherent and detailed.",
"cite_spans": [
{
"start": 204,
"end": 224,
"text": "(Riley et al., 2020)",
"ref_id": "BIBREF21"
},
{
"start": 559,
"end": 577,
"text": "(Cho et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 795,
"end": 813,
"text": "(Fan et al., 2018;",
"ref_id": "BIBREF7"
},
{
"start": 814,
"end": 833,
"text": "Ammanabrolu et al.)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Say It Again",
"sec_num": "4.2"
},
{
"text": "One ubiquitous quality of such hierarchical systems is that the high level representation is a structural and/or semantic abstraction chosen to be amenable to plot coherence modeling. This experiment poses the question: what if the high level representation was itself natural language? To explain our setup we make the distinction between simple text and colorful text, where the former is a grammatically bare bones statement of fact and the latter is more linguistically interesting, as a sentence might actually appear.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Story Spine",
"sec_num": "4.3"
},
{
"text": "We use two Formulas to accomplish this goal, shown in Figure 5 . The first takes a simple short plot point sentence as input and returns a plausible following simple plot point as output; this is used in a loop to generate the spine. The second is a context conditional paraphrasing formula with two inputs; the first is a simple plot point and the second is the \"story so far\" which is written in colorful text. The Formula's output is a paraphrase of the simple plot point, colorized to both respect the factual information of the plot point and the context of the story so far.",
"cite_spans": [],
"ref_spans": [
{
"start": 54,
"end": 62,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Story Spine",
"sec_num": "4.3"
},
{
"text": "The user is presented with an interface that lets them write and edit custom spine plot points as well as use the first Formula to generate up to five candidates for plot points to continue the story. Each spine plot point is connected to its colorized paraphrase, and the paraphrases appear as a whole on the right side. In order to maintain a model of the mapping between the spine and colorized text, the colorized text is not editable. Interesting characters are at the heart of much creative writing, and various template filling exercises exist to create them. Often this comes down to filling out a template containing fields that flesh out the character, as shown in Figure 6 . In this experiment, the user is presented with an editable template containing each of these fields, with the option to edit or clear any of the fields' values. Once a value has been cleared, it can be filled in by the LM conditioned on all current non-empty fields.",
"cite_spans": [],
"ref_spans": [
{
"start": 661,
"end": 669,
"text": "Figure 6",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Story Spine",
"sec_num": "4.3"
},
{
"text": "We take this experiment in a direction that goes beyond our own Formula development tools to define a flexible Few Shot model for data completion. Our generalized problem statement is as follows: given a set of fields of which an arbitrary subset are blank, for one such blank field generate a plausible value conditioned on the non-blank fields. We build a dynamic Formula creation system that fulfills this generalized contract, and apply it to the filling of character creation exercise forms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Character Maker",
"sec_num": "4.4"
},
{
"text": "Our few shot solution naturally relies on a small number of fully filled out and plausibly consistent fields (e.g. complete character descriptions). At inference time, we extract the subset of non-blank fields in the inference item from each of these few shot examples and stitch together a Formula on the spot with precisely these inputs and the single output of the desired inference output field. This dynamic creation of Formulas requires a flexible Serialization that can accommodate any field name and value in any order; for this experiment we simply use \"name : value\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Character Maker",
"sec_num": "4.4"
},
{
"text": "In improvisational acting (improv) one of the primary pleasures is to see actors bring a set of constraints provided by the audience to life in a coherent story. We see the potential for the sometimes wildly creative suggestions of large language models to supply these constraints, either as a tool for practitioners to hone their craft or as a way to spice up (or speed up) a live performance itself. Improv constraints must be both open ended and subject to specific categories; for example the popular \"Party Quirks\" game requires a personal quirk for each actor attending a dinner party. We build Formulas and UIs for several improv games, and note their distinction from the other Formulas in this work in that they require no user input at all.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Improv Prompts",
"sec_num": "4.5"
},
{
"text": "In constructing such zero input few shot learning models it became apparent that beyond controlling the grammatical form and semantic intent of the outputs we could also control their tone, as it would mimic the tone of the Formula's Data. Crucially, this allows easy adaptation of these tools to different audiences (children versus adults, for example) and an implicit nudge towards whimsical outputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Improv Prompts",
"sec_num": "4.5"
},
{
"text": "While the experiments presented above demonstrate how few shot learning can be used to create interesting tools for writers, the real power of STORY CENTAUR is its unlocking of rapid experimentation. Not only were we able to probe the boundary of what \"works\" efficiently, but we were also able to engage individuals, regardless of formal machine learning training, to help us do so. Needless to say, in the course of this work many attempted Formulas did not produce compelling results. Perhaps our most interesting failure was to build a Formula that would produce the second half of a rhyming couplet given the first half, a task that would require understanding of both phonetics and meter as well as linguistic coherence. This was disappointing given the compelling examples of GPT-3 poetry available online 4 . One possible explanation is that while general poetry and rhyming couplets specifically are in our minds closely connected by a subset relationship rooted in human culture, the hard constraint of rhyme and meter in fact divides them into very different problems for an LM. It is certainly the case that recent successful work in rhyming metered poetry generation has needed to resort to fixed rhyme words and syllable counting (Ghazvininejad et al., 2017).",
"cite_spans": [
{
"start": 1234,
"end": 1262,
"text": "(Ghazvininejad et al., 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "In terms of larger themes, we found that Formulas in which any of the inputs or outputs were much longer than a few sentences were hard to construct. We speculate that it is more difficult for the models to latch on to the Serialization in this case, as the observed symptom was often that no generated text passed the de-serialization filter. On the positive side, we observed that few shot tasks that rely on paraphrasing (such as those used in Say It Again) were surprisingly easy to construct successfully.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "It is a common and intuitively plausible observation that the design of the Serialization is crucial to the performance of few shot learning with large LMs. Our Formulas can only be evaluated qualitatively and so we leave to future work the human studies that would be necessary to investigate this hypothesis. Our Magic Word experiment does offer the promise of a good test bed for Serialization design; even after considerable iteration we found the rate at which outputs pass both the de-Serialization and Magic Word filters to be surprisingly low given the relative simplicity of the task and the model's innate ability to generate coherent text continuation. 4 https://www.gwern.net/GPT-3#transformer-poetry",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Finally we note that our true goal is to empower artists with no technical training to imagine a Formula, construct it in our development mode, and then produce experiments as we have. In our current process, such an artist could indeed construct their Formula, but would at some point require a programmer to build it into an experiment; removing this dependency would require e.g. a WYSIWYG editor. While this was beyond the scope of our work, we did construct our system using Angular, a modern web development framework whose core premise is modularity, dependency injection, and reuse of components. Not only do our experiments make use of a small set of these reusable components for functionality like editable text fields and clickable suggestion lists, but also all text and Formulas are synchronized by a global pub-sub service with simple string keys.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "We present STORY CENTAUR, a tool for the creation and tuning of text based few shot learning Formulas powered by large language models, along with several experiments using Formulas built with our tool, all focused on the topic of creative writing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The emergence of large language models has shaped the course of NLP research in the late 2010s, but the question remains as to what, if any, is a viable use case for these models in their raw, un-finetuned, form. Additionally, while some claim that scaling these models is a viable path to Artificial General Intelligence, others disagree (Bender and Koller, 2020), and learning what is easy, hard, and impossible for them is crucial to this debate. The answers to these questions will undoubtedly reveal themselves in the coming years and we are particularly excited to see their impact on the fine arts. In particular, we see great potential in tools built with this technology when there is a human, in this case an artist, in the loop to complement the natural deficiencies of a simple but powerful text generator that lacks editorial control and responsibility.",
"cite_spans": [
{
"start": 339,
"end": 364,
"text": "(Bender and Koller, 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "source code: https://chonger.github.io/centaur/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://collection.eliterature.org/ 3 Case (2018) also explains this term's etymology. It is central to our design that the artist be given the tools to create and iterate on NLP models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Story realization: Expanding plot events into sentences",
"authors": [
{
"first": "Prithviraj",
"middle": [],
"last": "Ammanabrolu",
"suffix": ""
},
{
"first": "Ethan",
"middle": [],
"last": "Tien",
"suffix": ""
},
{
"first": "Wesley",
"middle": [],
"last": "Cheung",
"suffix": ""
},
{
"first": "Zhaochen",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lara",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"O"
],
"last": "Martin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Riedl",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prithviraj Ammanabrolu, Ethan Tien, Wesley Cheung, Zhaochen Luo, William Ma, Lara J Martin, and Mark O Riedl. Story realization: Expanding plot events into sentences.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "On the dangers of stochastic parrots: Can language models be too big",
"authors": [
{
"first": "Emily",
"middle": [
"M"
],
"last": "Bender",
"suffix": ""
},
{
"first": "Timnit",
"middle": [],
"last": "Gebru",
"suffix": ""
},
{
"first": "Angelina",
"middle": [],
"last": "McMillan-Major",
"suffix": ""
},
{
"first": "Shmargaret",
"middle": [],
"last": "Shmitchell",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency; Association for Computing Machinery",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily M Bender, Timnit Gebru, Angelina McMillan- Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency; As- sociation for Computing Machinery: New York, NY, USA.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Climbing towards NLU: On meaning, form, and understanding in the age of data",
"authors": [
{
"first": "Emily",
"middle": [
"M"
],
"last": "Bender",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Koller",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5185--5198",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.463"
]
},
"num": null,
"urls": [],
"raw_text": "Emily M. Bender and Alexander Koller. 2020. Climb- ing towards NLU: On meaning, form, and under- standing in the age of data. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 5185-5198, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Language models are few-shot learners",
"authors": [
{
"first": "Tom",
"middle": [
"B"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Mann",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Ryder",
"suffix": ""
},
{
"first": "Melanie",
"middle": [],
"last": "Subbiah",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Kaplan",
"suffix": ""
},
{
"first": "Prafulla",
"middle": [],
"last": "Dhariwal",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Neelakantan",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Shyam",
"suffix": ""
},
{
"first": "Girish",
"middle": [],
"last": "Sastry",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Askell",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.14165"
]
},
"num": null,
"urls": [],
"raw_text": "Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "How to become a centaur",
"authors": [
{
"first": "Nicky",
"middle": [],
"last": "Case",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Design and Science",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicky Case. 2018. How to become a centaur. Journal of Design and Science.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Towards coherent and cohesive long-form text generation",
"authors": [
{
"first": "Woon",
"middle": [
"Sang"
],
"last": "Cho",
"suffix": ""
},
{
"first": "Pengchuan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiujun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Mengdi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1811.00511"
]
},
"num": null,
"urls": [],
"raw_text": "Woon Sang Cho, Pengchuan Zhang, Yizhe Zhang, Xi- ujun Li, Michel Galley, Chris Brockett, Mengdi Wang, and Jianfeng Gao. 2018. Towards coher- ent and cohesive long-form text generation. arXiv preprint arXiv:1811.00511.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Hierarchical neural story generation",
"authors": [
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.04833"
]
},
"num": null,
"urls": [],
"raw_text": "Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hi- erarchical neural story generation. arXiv preprint arXiv:1805.04833.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Hafez: an interactive poetry generation system",
"authors": [
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Xing",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Jay",
"middle": [],
"last": "Priyadarshi",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL 2017, System Demonstrations",
"volume": "",
"issue": "",
"pages": "43--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marjan Ghazvininejad, Xing Shi, Jay Priyadarshi, and Kevin Knight. 2017. Hafez: an interactive poetry generation system. In Proceedings of ACL 2017, System Demonstrations, pages 43-48.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Unsupervised hierarchical story infilling",
"authors": [
{
"first": "Daphne",
"middle": [],
"last": "Ippolito",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Eck",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the First Workshop on Narrative Understanding",
"volume": "",
"issue": "",
"pages": "37--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daphne Ippolito, David Grangier, Chris Callison- Burch, and Douglas Eck. 2019. Unsupervised hier- archical story infilling. In Proceedings of the First Workshop on Narrative Understanding, pages 37- 43.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Cozy mystery construction kit: Prototyping toward an ai-assisted collaborative storytelling mystery game",
"authors": [
{
"first": "Max",
"middle": [],
"last": "Kreminski",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Acharya",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Junius",
"suffix": ""
},
{
"first": "Elisabeth",
"middle": [],
"last": "Oliver",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Compton",
"suffix": ""
},
{
"first": "Melanie",
"middle": [],
"last": "Dickinson",
"suffix": ""
},
{
"first": "Cyril",
"middle": [],
"last": "Focht",
"suffix": ""
},
{
"first": "Stacey",
"middle": [],
"last": "Mason",
"suffix": ""
},
{
"first": "Stella",
"middle": [],
"last": "Mazeika",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Wardrip-Fruin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 14th International Conference on the Foundations of Digital Games",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Max Kreminski, Devi Acharya, Nick Junius, Elisabeth Oliver, Kate Compton, Melanie Dickinson, Cyril Focht, Stacey Mason, Stella Mazeika, and Noah Wardrip-Fruin. 2019. Cozy mystery construction kit: Prototyping toward an ai-assisted collaborative sto- rytelling mystery game. In Proceedings of the 14th International Conference on the Foundations of Dig- ital Games, pages 1-9.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Alon Ilsar, Alison Pease, and Simon Colton. 2020. Explainable computational creativity",
"authors": [
{
"first": "Maria",
"middle": [
"Teresa"
],
"last": "Llano",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "d'Inverno",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Yee-King",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "McCormack",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Ilsar",
"suffix": ""
},
{
"first": "Alison",
"middle": [],
"last": "Pease",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Colton",
"suffix": ""
}
],
"year": null,
"venue": "Proc. ICCC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Teresa Llano, Mark d'Inverno, Matthew Yee- King, Jon McCormack, Alon Ilsar, Alison Pease, and Simon Colton. 2020. Explainable computa- tional creativity. In Proc. ICCC.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Improvisational computational storytelling in open worlds",
"authors": [
{
"first": "Lara",
"middle": [
"J"
],
"last": "Martin",
"suffix": ""
},
{
"first": "Brent",
"middle": [],
"last": "Harrison",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"O"
],
"last": "Riedl",
"suffix": ""
}
],
"year": 2016,
"venue": "International Conference on Interactive Digital Storytelling",
"volume": "",
"issue": "",
"pages": "73--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lara J Martin, Brent Harrison, and Mark O Riedl. 2016. Improvisational computational storytelling in open worlds. In International Conference on Interactive Digital Storytelling, pages 73-84. Springer.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Improvised theatre alongside artificial intelligences",
"authors": [
{
"first": "Kory",
"middle": [],
"last": "Mathewson",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Mirowski",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment",
"volume": "13",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kory Mathewson and Piotr Mirowski. 2017. Impro- vised theatre alongside artificial intelligences. In Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, volume 13.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "26",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. Advances in neural information processing sys- tems, 26:3111-3119.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Human improvised theatre augmented with artificial intelligence",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Mirowski",
"suffix": ""
},
{
"first": "Kory",
"middle": [
"Wallace"
],
"last": "Mathewson",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 on Creativity and Cognition",
"volume": "",
"issue": "",
"pages": "527--530",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Mirowski and Kory Wallace Mathewson. 2019. Human improvised theatre augmented with artificial intelligence. In Proceedings of the 2019 on Creativ- ity and Cognition, pages 527-530.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "2018. I lead, you help but only with enough details: Understanding user experience of co-creation with artificial intelligence",
"authors": [
{
"first": "Changhoon",
"middle": [],
"last": "Oh",
"suffix": ""
},
{
"first": "Jungwoo",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Jinhan",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Seonghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sungwoo",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Bongwon",
"middle": [],
"last": "Suh",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems",
"volume": "",
"issue": "",
"pages": "1--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Changhoon Oh, Jungwoo Song, Jinhan Choi, Seonghyeon Kim, Sungwoo Lee, and Bongwon Suh. 2018. I lead, you help but only with enough details: Understanding user experience of co-creation with artificial intelligence. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pages 1-13.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.05365"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. arXiv preprint arXiv:1802.05365.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Learning to generate reviews and discovering sentiment",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Rafal",
"middle": [],
"last": "Jozefowicz",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.01444"
]
},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Rafal Jozefowicz, and Ilya Sutskever. 2017. Learning to generate reviews and discovering sentiment. arXiv preprint arXiv:1704.01444.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Improving language understanding by generative pre-training",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Salimans",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language under- standing by generative pre-training.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI blog",
"volume": "1",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Textsettr: Label-free text style extraction and tunable targeted restyling",
"authors": [
{
"first": "Parker",
"middle": [],
"last": "Riley",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Mandy",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Girish",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Uthus",
"suffix": ""
},
{
"first": "Zarana",
"middle": [],
"last": "Parekh",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.03802"
]
},
"num": null,
"urls": [],
"raw_text": "Parker Riley, Noah Constant, Mandy Guo, Girish Kumar, David Uthus, and Zarana Parekh. 2020. Textsettr: Label-free text style extraction and tunable targeted restyling. arXiv preprint arXiv:2010.03802.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Writing with the machine: Gpt-2 and text generation",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Sloan",
"suffix": ""
}
],
"year": 2019,
"venue": "Roguelike Celebration",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robin Sloan. 2019. Writing with the machine: Gpt-2 and text generation. In Roguelike Celebration.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Maximos Kaliakatsos-Papakostas, Vassilis Katsouros, Apostolos Kastritsis, Konstantinos Christantonis, Konstantinos Diamantaras, and Michael Loupis",
"authors": [
{
"first": "Charalampos",
"middle": [],
"last": "Tsioustas",
"suffix": ""
},
{
"first": "Daphne",
"middle": [],
"last": "Petratou",
"suffix": ""
},
{
"first": "Maximos",
"middle": [],
"last": "Kaliakatsos-Papakostas",
"suffix": ""
},
{
"first": "Vassilis",
"middle": [],
"last": "Katsouros",
"suffix": ""
},
{
"first": "Apostolos",
"middle": [],
"last": "Kastritsis",
"suffix": ""
},
{
"first": "Konstantinos",
"middle": [],
"last": "Christantonis",
"suffix": ""
},
{
"first": "Konstantinos",
"middle": [],
"last": "Diamantaras",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Loupis",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charalampos Tsioustas, Daphne Petratou, Maximos Kaliakatsos-Papakostas, Vassilis Katsouros, Apos- tolos Kastritsis, Konstantinos Christantonis, Kon- stantinos Diamantaras, and Michael Loupis. 2020.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "STORY CENTAUR's Formula Development User Interface.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "Formula I/O for Magic Word.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF2": {
"text": "Formula I/O for Say It Again's CUSTOM Formula.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF3": {
"text": "Story Spine's two formulas for spine continuation (top) and spine colorizing (bottom).",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF4": {
"text": "The eight character creation points used in Character Maker, with examples.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF5": {
"text": "A Screenshot of the Magic Word Formula and Experiment.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF6": {
"text": "Say It Again in the style of a Shakespearean character using 5 Few Shot Examples.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF7": {
"text": "Say It Again in the style of Bill and Ted using 5 Few Shot Examples.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF8": {
"text": "Say It Again in the style of Data from Star Trek using 5 Few Shot Examples of style transfer, but no examples of Data's actual style.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF9": {
"text": "Story Spine screenshots. In the top screenshot, a five segment spine has been constructed, but only two spine segments have been colorized. The bottom image shows the result of colorizing the final three segments.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF10": {
"text": "Character Maker, before and after generation of the Tactics field. In this case, generation was performed by constructing a few shot Formula dynamically with Super-Objective, Sub-Objective, Scene Objective, and Backstory as input, and Tactics as output, using 4 examples of full templates to create the Data.",
"type_str": "figure",
"num": null,
"uris": null
}
}
}
}