Dataset schema:
- paper_id: string (length 10)
- yes_no: bool (2 classes)
- paper_index: int64 (range 0-519)
- evidence: string (0 to 37.7k characters)
- question: string (4 to 11.7k characters)
- answer: string (1 to 26k characters)
null
false
null
What popular wine varietal is produced in Washington?
Washington is well known for its white wine, Riesling, and red wine, Cabernet Sauvignon.
null
false
null
Amazon Prime Video, also known simply as Prime Video, is an American subscription video on-demand over-the-top streaming and rental service of Amazon offered as a standalone service or as part of Amazon's Prime subscription. The service primarily distributes films and television series produced by Amazon Studios and MGM Holdings or licensed to Amazon, as Amazon Originals, with the service also hosting content from other providers, content add-ons, live sporting events, and video rental and purchasing services. Operating worldwide, the service may require a full Prime subscription to be accessed. In countries such as the United States, United Kingdom, and Germany, the service can be accessed without a full Prime subscription, whereas in Australia, Canada, France, India, Turkey, and Italy, it can only be accessed through a dedicated website. Prime Video additionally offers a content add-on service in the form of channels, called Amazon Channels, or Prime Video Channels, which allow users to subscribe to additional video subscription services from other content providers within Prime Video. Launched on September 7, 2006, as Amazon Unbox in the United States, the service grew with an expanding library, and added the Prime Video membership upon the development of the Prime subscription. It was then renamed as Amazon Instant Video on Demand. After acquiring the UK-based streaming and DVD-by-mail service LoveFilm in 2011, Prime Video was added to the Prime subscription in the United Kingdom, Germany and Austria in 2014, available on a monthly subscription of £/€8.99 per month, continuing the plan of LoveFilm Instant. The service was previously available in Norway, Denmark, and Sweden in 2012, but was discontinued in 2013. On April 18, 2016, Amazon split Prime Video from Amazon Prime in the US for $8.99 per month.
tell me how much Amazon Prime would cost
It depends on the region; Amazon Prime in the US costs USD 8.99 per month.
null
false
null
Give me a list of pizza restaurants in San Francisco
There are a lot of different types of pizza restaurants in San Francisco. For more casual pizza restaurants, try any Round Table, Domino's, or Pizza Hut, or for a more local experience, Golden Boy Pizza. If you're interested in sit-down pizza dining options, try Del Popolo, Tony's Slice House, or Il Casaro in North Beach.
1702.02363
true
null
In this work, we publish the TWNERTC dataset, in which named entities and categories of sentences have been automatically annotated. We use Turkish Wikipedia dumps as the text source and Freebase to construct large-scale gazetteers that map fine-grained types to entities. To overcome noisy and ambiguous data, we leverage the domain information given by Freebase and develop domain-independent and domain-dependent methodologies. All versions of the datasets can be downloaded from our project web page. Our main contributions are (1) the publication of a Turkish corpus for coarse-grained and fine-grained NER and TC research, (2) six different versions of the corpus according to noise reduction methodology and entity types, (3) an analysis of the corpus, and (4) benchmark comparisons for NER and TC tasks against human annotators. To the best of our knowledge, these datasets are the largest available for Turkish NER and TC tasks.
Did they experiment with the dataset on some tasks?
Yes.
null
false
null
What is a capital gain in finance?
You make a capital gain if you sell or dispose of a long-term asset (such as a building) for more than it cost you. For example, if you buy a building for $500,000 and later sell it for $600,000, you realize a capital gain of $100,000.
1812.02802
false
null
The end-to-end system (prefix E2E) uses the DNN topology depicted in Figure FIGREF8. We present results with 3 distinct size configurations (infixes 700K, 318K, and 40K), each representing the approximate number of parameters, and 2 types of training recipes (suffixes 1stage and 2stage), corresponding to end-to-end and encoder+decoder respectively, as described in UID7. The input to all DNNs consists of a sequence with INLINEFORM0 frames of left and INLINEFORM1 frames of right context, each with a stride of INLINEFORM2. More specifically, the E2E_700K model uses INLINEFORM3 nodes in the first 4 SVDF layers, each with a memory INLINEFORM4, with intermediate bottleneck layers each of size 64; the following 3 SVDF layers have INLINEFORM5 nodes, each with a memory INLINEFORM6. This model performs 350K multiply-accumulate operations per inference (every 20ms of streaming audio). The E2E_318K model uses INLINEFORM7 nodes in the first 4 SVDF layers, each with a memory INLINEFORM8, with intermediate bottleneck layers each of size 64; the remaining layers are the same as E2E_700K. This model performs 159K multiply-accumulate operations per inference. Finally, the E2E_40K model uses INLINEFORM9 nodes in the first 4 SVDF layers, each with a memory INLINEFORM10, with intermediate bottleneck layers each of size 32; the remaining layers are the same as in the other two models. This model performs 20K multiply-accumulate operations per inference.
How many parameters does the presented model have?
Three size configurations are presented, with approximately 700K, 318K, and 40K parameters.
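For intuition, here is a minimal NumPy sketch of a single rank-1 SVDF layer of the kind these models stack. The filter sizes and names below are illustrative assumptions, not the paper's values (which are elided above as INLINEFORM placeholders):

```python
import numpy as np

def svdf_layer(frames, alpha, beta):
    """Rank-1 SVDF node bank: a feature filter (alpha) applied to each input
    frame, then a time filter (beta) over a memory of the last T outputs."""
    T = beta.shape[1]
    feat = frames @ alpha.T                      # (n_frames, n_nodes)
    out = np.zeros_like(feat)
    for t in range(feat.shape[0]):
        window = feat[max(0, t - T + 1): t + 1]  # up to T past feature outputs
        out[t] = (beta[:, -window.shape[0]:] * window.T).sum(axis=1)
    return out

rng = np.random.default_rng(0)
frames = rng.normal(size=(20, 40))   # 20 frames of 40-dim audio features
alpha = rng.normal(size=(64, 40))    # 64 nodes, one feature filter each
beta = rng.normal(size=(64, 8))      # memory T = 8 per node
print(svdf_layer(frames, alpha, beta).shape)  # (20, 64)
```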
null
false
null
How is the game "ultimate frisbee" different from "frisbee golf"?
These two games are both played with a frisbee (or "disc"), which is a plastic disc that people throw through the air like a ball. The difference is that ultimate frisbee is a team sport that combines several of the rules of football, basketball and soccer. Like football, a team takes possession of the frisbee and tries to move it downfield and into an end zone. Like basketball, once you are in possession of the frisbee, you cannot move your pivot foot. However, there is no "dribbling" with a frisbee, so if a player has possession of the frisbee outside of their end zone, they have to pass it before they can move their pivot foot. Ultimate frisbee is played on a soccer field, and the player movement looks most like that sport. Frisbee golf, on the other hand, follows the rules of golf, except that the player throws a frisbee to the hole instead of hitting a ball with clubs. Some people play the sport on dedicated frisbee golf courses, using a set of specialized discs of various weights and carrying them in a bag from hole to hole like a golfer would do with their set of clubs. Others simply take a standard 175-gram frisbee and make up a course as they go. For example, two players might point to a tree on the other side of a park and compete by seeing who can hit the tree with a frisbee in the fewest attempts.
null
false
null
What is fake news?
There is no clear definition of fake news. In general, however, fake news is false or misleading information spread across media. Fake news on social media may be used to attract views and likes, so that the person posting gains higher visibility and popularity. Sometimes fake news is used in a more orchestrated manner, for example in marketing campaigns aimed at damaging the reputation of brands or persons, or at interfering with mass opinion in democratic countries, such as during elections.
null
false
null
Categorize the following Taylor Swift songs based on what album they are in: The Archer, tolerate it, Lover, Love Story (Taylor's Version), Bejeweled, Don't Blame Me, 22 (Taylor's version), Look What You Made Me Do, Lavender Haze, willow, Cruel Summer
The following songs are in Taylor Swift's album Lover: The Archer, Lover, Cruel Summer.
The following song is in the album Fearless (Taylor's Version): Love Story (Taylor's Version).
The following songs are in the album evermore: tolerate it, willow.
The following songs are in the album Reputation: Don't Blame Me, Look What You Made Me Do.
The following song is in the album Red (Taylor's Version): 22 (Taylor's Version).
The following songs are in the album Midnights: Bejeweled, Lavender Haze.
null
false
null
What are three ways to camp?
Tent camping, car camping, cabin camping
null
false
173
Encoding of words is perhaps the most important step towards a successful end-to-end natural language processing application. Although word embeddings have been shown to benefit such models, they commonly treat words as the smallest meaning-bearing unit and assume that each word type has its own vector representation. This assumption has two major shortcomings, especially for languages with rich morphology: (1) inability to handle unseen or out-of-vocabulary (OOV) word-forms and (2) inability to exploit the regularities among word parts. The limitations of word embeddings are particularly pronounced in sentence-level semantic tasks, especially in languages where word parts play a crucial role. Consider the Turkish sentences “Köy+lü-ler (villagers) şehr+e (to town) geldi (came)” and “Sendika+lı-lar (union members) meclis+e (to council) geldi (came)”. Here the stems köy (village) and sendika (union) function similarly in semantic terms with respect to the verb come (as the origin of the agents of the verb), whereas şehir (town) and meclis (council) both function as the end point. These semantic similarities are determined by the common word parts. However, orthographic similarity does not always correspond to semantic similarity. For instance, the orthographically similar words knight and night have large semantic differences. Therefore, for a successful semantic application, the model should be able to capture both the regularities, i.e., morphological tags, and the irregularities, i.e., lemmas of the word. Morphological analysis already provides the aforementioned information about the words. However, access to useful morphological features may be problematic due to software licensing issues, lack of robust morphological analyzers, and high ambiguity among analyses. Character-level models (CLM), being a cheaper and more accessible alternative to morphology, have been reported as performing competitively on various NLP tasks BIBREF0, BIBREF1, BIBREF2. However, the extent to which these tasks depend on morphology is small, and their relation to semantics is weak. Hence, little is known about their true ability to reveal the underlying morphological structure of a word, or about their semantic capabilities. Furthermore, their behaviour across languages from different families, and their limitations and strengths, such as handling of long-range dependencies, reaction to model complexity, or performance on out-of-domain data, are unknown. Analyzing such issues is key to fully understanding character-level models. To achieve this, we perform a case study on semantic role labeling (SRL), a sentence-level semantic analysis task that aims to identify predicate-argument structures and assign meaningful labels to them, as follows: $[$Villagers$]_{\text{comers}}$ came $[$to town$]_{\text{end point}}$. We use a simple method based on bidirectional LSTMs to train three types of base semantic role labelers that employ (1) words, (2) characters and character sequences, and (3) gold morphological analysis. The gold morphology serves as the upper bound for us to compare and analyze the performance of character-level models on languages of varying morphological typologies. We carry out an exhaustive error analysis for each language type and analyze the strengths and limitations of character-level models compared to morphology.
In regard to the diversity hypothesis, which states that diversity of systems in ensembles leads to further improvement, we combine character- and morphology-level models and measure the performance of the ensemble to better understand how similar they are. We experiment with several languages with varying degrees of morphological richness and typology: Turkish, Finnish, Czech, German, Spanish, Catalan and English. Our experiments and analysis reveal several insights.
What languages do the authors experiment with?
Turkish, Finnish, Czech, German, Spanish, Catalan and English.
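The evidence mentions ensembling the character- and morphology-level labelers; the paper's exact combination scheme is not given here, but a common way to realize such an ensemble is to average the two systems' per-token label distributions, as in this illustrative sketch:

```python
import numpy as np

# per-token distributions over SRL labels from two independent labelers
p_char = np.array([[0.6, 0.3, 0.1],
                   [0.2, 0.5, 0.3]])
p_morph = np.array([[0.5, 0.4, 0.1],
                    [0.1, 0.3, 0.6]])

p_ens = (p_char + p_morph) / 2.0   # average the systems' predictions
print(p_ens.argmax(axis=1))        # final label per token: [0 2]
```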
null
false
null
What is the difference between tennis shoes and sandals?
Tennis shoes are a form of footwear that fully covers and supports your feet. These shoes are designed to endure a lot of wear and tear, as they are often used for physical activity and general daily use. Tennis shoes typically have full heel support that limits the movement of your foot within the shoe as you wear them. Tennis shoes also have laces, which let the wearer tighten the shoe fabric around the foot, providing further support. Sandals are a type of footwear that does not fully cover or support your foot. Sandals typically have some sort of strap or toe fixture that helps to keep the shoe on the wearer as they walk. This type of shoe is mostly used for short-term wear, at the beach, or at the pool, as sandals are extremely convenient to put on and take off.
null
false
null
Increased money supply (i.e. money printing)
Demand-pull inflation. Cost-push inflation. Increased money supply (i.e. money printing). Currency devaluation. Government policies and regulations.
null
false
null
The Boston Marathon is an annual marathon race hosted by several cities and towns in greater Boston in eastern Massachusetts, United States. It is traditionally held on Patriots' Day, the third Monday of April. Begun in 1897, the event was inspired by the success of the first marathon competition in the 1896 Summer Olympics. The Boston Marathon is the world's oldest annual marathon and ranks as one of the world's best-known road racing events. It is one of six World Marathon Majors. Its course runs from Hopkinton in southern Middlesex County to Copley Square in Boston. The Boston Athletic Association (B.A.A.) has organized this event annually since 1897, except for 2020 when it was canceled due to the COVID-19 pandemic. In 2021, it was held later, in October. The race has been managed by DMSE Sports, Inc., since 1988. Amateur and professional runners from all over the world compete in the Boston Marathon each year, braving the hilly Massachusetts terrain and varying weather to take part in the race. The event attracts 500,000 spectators each year, making it New England's most widely viewed sporting event. Starting with just 15 participants in 1897, the event now attracts an average of about 30,000 registered participants each year, with 30,251 people entering in 2015. The Centennial Boston Marathon in 1996 established a record as the world's largest marathon with 38,708 entrants, 36,748 starters, and 35,868 finishers.
Using examples taken from the text give me a summary of why the Boston Marathon is well known.
The Boston Marathon, which began in 1897, is the world's oldest annual marathon, inspired by the 1896 Summer Olympics. It is one of the six World Marathon Majors. The race has been held annually since 1897 except for 2020, when it was canceled due to the COVID-19 pandemic. The marathon is traditionally held on Patriots' Day, the third Monday of April. It attracts 500,000 spectators and about 30,000 registered participants each year, making it one of the world's best-known road racing events.
2003.08897
false
null
By combining both NSA and GSA, we obtain an enhanced SA module. We then construct our Normalized and Geometry-aware Self-Attention Network, namely NG-SAN, by replacing the vanilla SA modules in the encoder of the self-attention network with the proposed one. Extensive experiments on MS-COCO validate the effectiveness of our proposals. In particular, our NG-SAN establishes a new state-of-the-art on the MS-COCO evaluation server, improving the best single-model result in terms of CIDEr from 125.5 to 128.6. To demonstrate the generality of NSA, we further present video captioning, machine translation, and visual question answering experiments on the VATEX, WMT 2014 English-to-German, and VQA-v2 datasets, respectively. On top of the strong Transformer-based baselines, our methods can consistently increase accuracy on all tasks at a negligible extra computational cost.
What datasets are used for experiments on three other tasks?
VATEX (video captioning), WMT 2014 English-to-German (machine translation), and VQA-v2 (visual question answering).
null
false
null
In the United States of America, which state is the largest in size?
In the United States of America, Alaska is the largest state in physical size. Although not the most populated state, it is 663,300 square miles in size. The next largest state is Texas, followed by California.
null
false
null
What are the benefits of using the ReLU activation function over the classic sigmoid function in neural networks?
1. Gradient vanishing problem: the maximum value of the derivative of the sigmoid function is 0.25, which can cause the total gradients to become very small during backpropagation. In contrast, ReLU has a constant gradient of 1 for positive inputs.
2. Compute efficiency: ReLU can be computed as max(0, x), which is more efficient than the exponential operations in the sigmoid function.
3. Convergence speed: it is also reported that neural networks using ReLU as the activation function converge faster than those using the sigmoid function.
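A minimal NumPy sketch (an illustration, not part of the original answer) comparing the two gradients described in point 1:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)          # peaks at 0.25 when x == 0

def relu_grad(x):
    return (x > 0).astype(float)  # constant 1 for positive inputs

x = np.linspace(-5.0, 5.0, 11)
print(sigmoid_grad(x).max())      # 0.25: stacked layers shrink the gradient
print(relu_grad(x))               # 0 or 1: active units pass the gradient through
```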
null
false
null
I don't like how my coffee tastes. What are some reasons that the coffee I brew isn't good?
- Using too much or too little coffee can make the coffee too strong or weak.
- Grinding coffee too finely can make it too strong and bitter, while grinding too coarsely can make it weak. Use the right grind for your brewing method: finer for drip coffee, coarser for French press.
- The quality of the coffee beans affects the flavor of the brewed coffee. Fresh, high-quality beans can improve the coffee's flavor.
- Fresh-ground coffee will typically make better brewed coffee than previously ground coffee.
- The temperature and quality of the water can affect the coffee's flavor. Hard water can negatively impact the coffee's flavor.
null
false
null
Classify each of the following sports terminologies by sport: Earned Run Average, WHIP, Bicycle Kick, Touchdown, Safety, Quality Start, Runs, RBI, Stolen Base, Batting Average, Inning, Strikeout, Shortstop, Striker, Sweeper, Stopper, Relief Pitcher, Nutmeg, Slide Tackle, Extra Point, Barrel Rate, Midfielder, Yellow Card, Red Card, Corner Kick, Goal Kick, Set Piece, Direct Free Kick, FIFA, MLS, Concacaf, Half Volley, Full Volley, Base on Balls, Batter's Box, Mound, Breaking Ball, Slider, Designated Hitter, Knuckle Ball, No Hitter, Triple Play, Double Play, Wild Pitch, Linebacker, Running Back, Quarterback, Wide Receiver, Tight End, Chop Block
Soccer: Bicycle Kick, Striker, Sweeper, Stopper, Nutmeg, Slide Tackle, Midfielder, Yellow Card, Red Card, Corner Kick, Goal Kick, Set Piece, Direct Free Kick, FIFA, Concacaf, MLS, Half Volley, Full Volley
Baseball: Earned Run Average, WHIP, Quality Start, Runs, RBI, Stolen Base, Batting Average, Inning, Strikeout, Shortstop, Relief Pitcher, Barrel Rate, Base on Balls, Batter's Box, Mound, Breaking Ball, Slider, Designated Hitter, Knuckle Ball, No Hitter, Triple Play, Double Play, Wild Pitch
Football: Touchdown, Safety, Extra Point, Linebacker, Running Back, Quarterback, Wide Receiver, Tight End, Chop Block
null
false
null
Hermann Fernau (born 1883 or 1884) was a German lawyer, writer, journalist and pacifist.
Replace the word "lawyer" with "judge" and the word "pacifist" with "peace lover"
Hermann Fernau (born 1883 or 1884) was a German judge, writer, journalist and peace lover.
null
false
32
The shared task had two phases: in the development phase, the participants were provided labeled training and development datasets; in the testing phase, testing input was further provided. The participants tried to achieve the best performance on the development set. A live leaderboard kept track of the submissions. The test set was then released and the participants had a few days to make final predictions. In phase 2, no immediate feedback on the submissions was provided. The winner was determined based on the performance on the test set.
Which two phases does the shared task have?
The development phase and the testing phase.
null
false
null
Charles Wing Krafft (September 19, 1947 – June 12, 2020) was an American painter and ceramicist whose later work incorporated traditional ceramic decorative styles to produce works commemorating modern disasters. In 1998, he was called "the dark angel of Seattle art" by the art critic of the Seattle Post-Intelligencer. In early 2013 it was revealed that he participated in white nationalist and Holocaust denial websites, which led to a re-evaluation of his artwork.

Early life

Charles Krafft was born in Seattle in 1947, and grew up near the Seattle Art Museum, which he frequented, leading to his initial interest in art. In his teenage years, he became interested in the writings of Jack Kerouac and the artwork of Von Dutch, a prominent custom car designer, gunsmith and explosives specialist. He later became fascinated by the art of Morris Graves and the Northwest School. He has named Von Dutch and Graves as his "greatest inspirations". He also spent time with Guy Anderson, also of the Northwest School. He soon moved into a cabin in an artists' commune in Fishtown, near La Conner, Washington, where he lived for 12 years. Initially he went there to attempt to learn to meditate. Although this failed, he first began painting there.
Given a reference text about Charles Wing Krafft, tell me where he was born and his occupation.
Charles Wing Krafft was born in Seattle and was a painter and ceramicist.
null
false
null
Ruth Handler watched her daughter Barbara play with paper dolls, and noticed that she often enjoyed giving them adult roles. At the time, most children's toy dolls were representations of infants. Realizing that there could be a gap in the market, Handler suggested the idea of an adult-bodied doll to her husband Elliot, a co-founder of the Mattel toy company. He was unenthusiastic about the idea, as were Mattel's directors. During a trip to Europe in 1956 with her children Barbara and Kenneth, Ruth Handler came across a German toy doll called Bild Lilli. The adult-figured doll was exactly what Handler had in mind, so she purchased three of them. She gave one to her daughter and took the others back to Mattel. The Lilli doll was based on a popular character appearing in a comic strip drawn by Reinhard Beuthin for the newspaper Bild. Lilli was a blonde bombshell, a working girl who knew what she wanted and was not above using men to get it. The Lilli doll was first sold in Germany in 1955, and although it was initially sold to adults, it became popular with children who enjoyed dressing her up in outfits that were available separately.
Based on this text, what was Ruth Handler's inspiration for making the Barbie Doll?
Ruth Handler was inspired by the German Bild Lilli doll
null
false
null
The abacus (plural abaci or abacuses), also called a counting frame, is a calculating tool which has been used since ancient times. It was used in the ancient Near East, Europe, China, and Russia, millennia before the adoption of the Hindu-Arabic numeral system. The exact origin of the abacus has not yet emerged. It consists of rows of movable beads, or similar objects, strung on a wire. They represent digits. One of the two numbers is set up, and the beads are manipulated to perform an operation such as addition, or even a square or cubic root. In their earliest designs, the rows of beads could be loose on a flat surface or sliding in grooves. Later the beads were made to slide on rods and built into a frame, allowing faster manipulation. Abacuses are still made, often as a bamboo frame with beads sliding on wires. In the ancient world, particularly before the introduction of positional notation, abacuses were a practical calculating tool. The abacus is still used to teach the fundamentals of mathematics to some children, for example, in Russia.
What is an Abacus?
The abacus is a calculating tool that uses movable beads, often strung on a bamboo frame.
null
false
322
RDF BIBREF18 uses a graph-based data model for representing knowledge. Statements in RDF are expressed as so-called triples of the form (subject, predicate, object). RDF subjects and predicates are IRIs, and objects are either IRIs or literals. RDF literals always have a datatype that defines their possible values. A predicate denotes a property and can also be seen as a binary relation taking subject and object as arguments. For example, the following triple expresses that Albert Einstein was born in Ulm: (:Albert_Einstein :birthPlace :Ulm). The lexicalization of RDF triples must be able to deal with resources, classes, properties and literals.

Classes and resources

The lexicalization of classes and resources is carried out as follows: given a URI $u$, we ask for the English label of $u$ using a SPARQL query. If such a label does not exist, we use either the fragment of $u$ (the string after #) if it exists, or else the string after the last occurrence of /. Finally, this NL representation is realized as a noun phrase, and in the case of classes it is also pluralized. As an example, :Person is realized as people (its label).

Properties

The lexicalization of properties relies on the insight that most property labels are either nouns or verbs. While the mapping of a particular property p can be unambiguous, some property labels are not as easy to categorize. For example, the label crosses can either be the plural form of the noun cross or the third person singular present form of the verb to cross. To automatically determine which realization to use, we relied on the insight that the first and last word of a property label are often the key to determining the type of the property: properties whose label begins with a verb (resp. noun or gerund) are most likely to be realized as verbs (resp. nouns). We devised a set of rules to capture this behavior, which we omit due to space restrictions. In some cases (such as crosses) none of the rules applied. In these cases, we compare the probabilities $P(p|\texttt{noun})$ and $P(p|\texttt{verb})$, measured as $P(p|X) = \frac{\sum_{t \in \text{synset}(p|X)} f(t)}{\sum_{t \in \text{synset}(p)} f(t)}$, where $\text{synset}(p)$ is the set of all synsets of $p$, $\text{synset}(p|X)$ is the set of all synsets of $p$ that are of the syntactic class $X \in \lbrace \texttt{noun},\texttt{verb}\rbrace$, and $f(t)$ is the frequency of use of $p$ in the sense of the synset $t$ according to WordNet. If $P(p|\texttt{noun}) \ge \theta \cdot P(p|\texttt{verb})$, we choose to realize p as a noun; else we realize it as a verb. For $\theta = 1$, for example, dbo:crosses is realized as a verb.

Literals

Literals in an RDF graph usually consist of a lexical form LF and a datatype IRI DT, represented as "LF"^^<DT>. Optionally, if the datatype is rdf:langString, a non-empty language tag is specified and the literal is denoted as a language-tagged string. The realization of language-tagged strings is done by simply using the lexical form, while omitting the language tag. For example, "Albert Einstein"@en is realized as Albert Einstein. For other types of literals, we further differentiate between built-in and user-defined datatypes. For the former, we also use the lexical form, e.g. "123"^^xsd:int $\Rightarrow$ 123, while the latter are processed by using the literal value with the representation of its datatype IRI, e.g., "123"^^dt:squareKilometre as 123 square kilometres.

The realization $\rho$ of a triple (s p o) depends mostly on the verbalization of the predicate p. If p can be realized as a noun phrase, then a possessive clause can be used to express the semantics of (s p o); more formally, $\rho(\texttt{s p o}) \Rightarrow \texttt{poss}(\rho(\texttt{p}),\rho(\texttt{s})) \wedge \texttt{subj}(\textit{BE},\rho(\texttt{p})) \wedge \texttt{dobj}(\textit{BE},\rho(\texttt{o}))$. For example, if $\rho(\texttt{p})$ is a relational noun like birth place, e.g. in the triple (:Albert_Einstein :birthPlace :Ulm), then the verbalization is Albert Einstein's birth place is Ulm. Note that BE stands for the verb “to be”. In case p's realization is a verb, the triple can be verbalized as follows: $\rho(\texttt{s p o}) \Rightarrow \texttt{subj}(\rho(\texttt{p}),\rho(\texttt{s})) \wedge \texttt{dobj}(\rho(\texttt{p}),\rho(\texttt{o}))$. For example, in (:Albert_Einstein :influenced :Nathan_Rosen), $\rho(\texttt{p})$ is the verb influenced; thus, the verbalization is Albert Einstein influenced Nathan Rosen.

The same procedure used for generating a single triple can be applied for the generation of each triple in a set of triples. However, the NL output would then contain redundant information and consequently sound very artificial. Thus, the goal is to transform the generated description to sound more natural. To this end, we focus on two types of transformation rules (cf. BIBREF21): clustering and ordering, and grouping. In the following, we describe the transformation rules we employ in more detail. Note that clustering and ordering (SECREF18) is applied before grouping (SECREF26). We process the input trees in descending order with respect to the frequency of the variables they contain, starting with the projection variables and only after that turning to other variables. As an example, consider the following triples about two of the best-known people in the world. The five triples are verbalized as given in UNKREF20–UNKREF24. Clustering and ordering first takes all sentences containing the subject :Albert_Einstein, i.e. UNKREF21–UNKREF23, which are ordered such that copulative sentences (such as Albert Einstein is a scientist) come before other sentences, and then takes all sentences containing the remaining subject :William_Shakespeare in UNKREF20 and UNKREF24, resulting in a sequence of sentences as in UNKREF25. William Shakespeare is a writer. Albert Einstein's birth place is Ulm. Albert Einstein's death place is Princeton. Albert Einstein is a scientist. William Shakespeare's death date is 23 April 1616. $\Rightarrow$ Albert Einstein is a scientist. Albert Einstein's birth place is Ulm. Albert Einstein's death place is Princeton. William Shakespeare is a writer. William Shakespeare's death date is 23 April 1616.

Dalianis and Hovy describe grouping as a process of “collecting clauses with common elements and then collapsing the common elements”. The common elements are usually subject noun phrases and verb phrases (verbs together with object noun phrases), leading to subject grouping and object grouping. To maximize the grouping effects, we collapse common prefixes and suffixes of sentences, irrespective of whether they are full subject noun phrases or complete verb phrases. In the following we use $X_1,X_2,$...$X_N$ as variables for the root nodes of the input sentences and $Y$ as a variable for the root node of the output sentence. Furthermore, we abbreviate a subject $\texttt{subj}(X_i,s_i)$ as $\texttt{s}_i$, an object $\texttt{dobj}(X_i,o_i)$ as $\texttt{o}_i$, and a verb $\texttt{root}(\text{ROOT}_i,v_i)$ as $\texttt{v}_i$. Subject grouping collapses the predicates (i.e. verb and object) of two sentences if their subjects are the same, as specified in UNKREF27 (abbreviations as above): $\rho(\texttt{s}_1)=\rho(\texttt{s}_2) \wedge \texttt{cc}(\texttt{v}_1,\textit{coord}) \Rightarrow \texttt{root}(Y,\texttt{coord}(\texttt{v}_1,\texttt{v}_2)) \wedge \texttt{subj}(\texttt{v}_1,\texttt{s}_1) \wedge \texttt{dobj}(\texttt{v}_1,\texttt{o}_1) \wedge \texttt{subj}(\texttt{v}_2,\texttt{s}_1) \wedge \texttt{dobj}(\texttt{v}_2,\texttt{o}_2)$. An example is given in UNKREF28: the sentences share the subject Albert Einstein and thus can be collapsed into a single sentence. Albert Einstein is a scientist and Albert Einstein is known for general relativity. $\Rightarrow$ Albert Einstein is a scientist and known for general relativity. Object grouping collapses the subjects of two sentences if the realizations of the verbs and objects of the sentences are the same, where $\textit{coord} \in \lbrace \textsf{and},\textsf{or}\rbrace$ is the coordination combining the input sentences $X_1$ and $X_2$, and $\texttt{coord} \in \lbrace \texttt{conj},\texttt{disj}\rbrace$ is the corresponding coordination combining the subjects: $\rho(\texttt{o}_1)=\rho(\texttt{o}_2) \wedge \rho(\texttt{v}_1)=\rho(\texttt{v}_2) \wedge \texttt{cc}(\texttt{v}_1,\textit{coord}) \Rightarrow \texttt{root}(Y,\texttt{PLURAL}(\texttt{v}_1)) \wedge \texttt{subj}(\texttt{v}_1,\texttt{coord}(\texttt{s}_1,\texttt{s}_2)) \wedge \texttt{dobj}(\texttt{v}_1,\texttt{o}_1)$. For example, the sentences in UNKREF30 share their verb and object, and thus can be collapsed into a single sentence. Note that to this end the singular auxiliary was needs to be transformed into its plural form were. Benjamin Franklin was born in Boston. Leonard Nimoy was born in Boston. $\Rightarrow$ Benjamin Franklin and Leonard Nimoy were born in Boston.
How to express the statements in RDF?
Statements in RDF are expressed as so-called triples of the form (subject, predicate, object).
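As an illustration of the two verbalization patterns described in the evidence (a toy sketch, not the authors' actual implementation; the helper names are hypothetical):

```python
import re

def verbalize(subject: str, predicate: str, obj: str, predicate_is_noun: bool) -> str:
    """Render one RDF triple with the possessive-clause or verb pattern."""
    s = subject.lstrip(":").replace("_", " ")
    o = obj.lstrip(":").replace("_", " ")
    # split a camelCase property label, e.g. birthPlace -> birth place
    p = re.sub(r"(?<=[a-z])(?=[A-Z])", " ", predicate.lstrip(":")).lower()
    if predicate_is_noun:
        return f"{s}'s {p} is {o}."   # poss(p, s), subj(BE, p), dobj(BE, o)
    return f"{s} {p} {o}."            # subj(p, s), dobj(p, o)

print(verbalize(":Albert_Einstein", ":birthPlace", ":Ulm", True))
# -> Albert Einstein's birth place is Ulm.
print(verbalize(":Albert_Einstein", ":influenced", ":Nathan_Rosen", False))
# -> Albert Einstein influenced Nathan Rosen.
```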
null
false
null
What are three most commonly sung lyric soprano arias?
The three most commonly sung lyric soprano arias are:
- "Quando m'en vo" from La Boheme by Puccini
- "O mio babbino caro" from Gianni Schicchi by Puccini
- "Deh! vieni non tardar" from Le Nozze di Figaro by Mozart
null
false
46
The architecture of the QA model used is briefly summarized below. Here we choose QANet BIBREF2 as the base model for the following reasons: 1) it achieves the second-best performance on SQuAD, and 2) since there are no recurrent networks in QANet, its training speed is 5x faster than BiDAF BIBREF17 when reaching the same performance on SQuAD. The network architecture is illustrated in Figure FIGREF2. The left blocks and the right blocks form two QANets, each of which takes a document and a question as the input and outputs an answer. In QANet, an embedding encoder first obtains word and character embeddings for each word in INLINEFORM0 or INLINEFORM1 and then models the temporal interactions between words and refines word vectors to contextualized word representations. All encoder blocks used in QANet are composed exclusively of depth-wise separable convolutions and self-attention. The intuition here is that convolution components can model local interactions and self-attention components focus on modeling global interactions. The context-query attention layer generates the question-document similarity matrix and computes the question-aware vector representations of the context words. After that, a model encoder layer containing seven encoder blocks captures the interactions among the context words conditioned on the question. Finally, the output layer predicts a start position and an end position in the document to extract the answer span from the document.
What's the function of the output layer?
It predicts a start position and an end position in the document to extract the answer span from the document.
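A simplified NumPy sketch of such a span-prediction output layer (QANet's real layer scores positions from combinations of model-encoder outputs; the shapes, names, and greedy decoding here are illustrative assumptions):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
doc_len, hidden = 8, 16
H = rng.normal(size=(doc_len, hidden))  # one encoding per document word
w_start = rng.normal(size=hidden)       # scores each position as a start
w_end = rng.normal(size=hidden)         # scores each position as an end

p_start = softmax(H @ w_start)
p_end = softmax(H @ w_end)
start = int(p_start.argmax())
end = start + int(p_end[start:].argmax())  # greedy: constrain end >= start
print(f"predicted answer span: words {start}..{end}")
```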
null
false
null
Classify each of the following as either solid, liquid or gas: fart, urine, poop, toilet paper, toilet, sink, water, bar of soap, underwear, belt, steam
The solids include poop, toilet paper, the toilet, the bar of soap, underwear and belt. The liquids include urine and water. The gases include the fart and steam.
null
false
null
Write a manifesto for the "Be IRL" movement. The movement advocates being present to your surroundings and fellow humans, vs being immersed in the virtual world created by our smartphones.
New "technology" comes with incredible benefits, but also incredible long term harms. Think of plentiful calories (no fear of starvation) vs the obesity epidemic. Now think, constant dopamine vs incredible lack of connection to our environment, no new friends and a loneliness epidemic. We see it all around us like some dystopian movie. Everyone is looking down at their phones, their eyes and ears oblivious to everything around them. Colleges, the very places that should be bursting with energy, laughter, and serendipitous lifelong friendships, have become places full of people with few friends, constantly scrolling in search of the ever decreasing dopamine hit from a like or just something "new".
null
false
null
How can I reduce my carbon footprint?
There are several ways to reduce an individual's carbon footprint. It can be done by reducing your utilization of motor vehicles powered by fossil fuels like diesel or gasoline, including cars, airplanes, boats, trains and trucks. It is also possible to reduce your carbon footprint by easy changes in your lifestyle such as adjusting your thermostat so that you don't heat your house too hot in the winter or too cold in the summer, reducing your red meat intake and sourcing your electricity from renewable energy providers. It is also possible to reduce your carbon footprint by improving your home's energy efficiency by using energy efficient appliances, windows, water heaters and HVAC systems.
null
false
null
A minimum spanning tree (MST) or minimum weight spanning tree is a subset of the edges of a connected, edge-weighted undirected graph that connects all the vertices together, without any cycles and with the minimum possible total edge weight. That is, it is a spanning tree whose sum of edge weights is as small as possible. More generally, any edge-weighted undirected graph (not necessarily connected) has a minimum spanning forest, which is a union of the minimum spanning trees for its connected components. There are many use cases for minimum spanning trees. One example is a telecommunications company trying to lay cable in a new neighborhood. If it is constrained to bury the cable only along certain paths (e.g. roads), then there would be a graph containing the points (e.g. houses) connected by those paths. Some of the paths might be more expensive, because they are longer, or require the cable to be buried deeper; these paths would be represented by edges with larger weights. Currency is an acceptable unit for edge weight – there is no requirement for edge lengths to obey normal rules of geometry such as the triangle inequality. A spanning tree for that graph would be a subset of those paths that has no cycles but still connects every house; there might be several spanning trees possible. A minimum spanning tree would be one with the lowest total cost, representing the least expensive path for laying the cable.
Based on the text provided, what does a minimum spanning tree optimize?
A minimum spanning tree optimizes a graph by connecting all of its vertices while minimizing the sum of the edge weights.
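The passage doesn't prescribe an algorithm, but Kruskal's algorithm is one standard way to compute an MST; a minimal sketch, with a made-up version of the cable-laying example (vertices are houses, weights are costs):

```python
def kruskal_mst(n_vertices, edges):
    """Kruskal's algorithm: edges are (weight, u, v) tuples."""
    parent = list(range(n_vertices))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):     # greedily take the cheapest safe edge
        ru, rv = find(u), find(v)
        if ru != rv:                  # skip edges that would form a cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

# houses 0..3 connected by candidate cable paths (weight = cost)
edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3), (5, 1, 3)]
print(kruskal_mst(4, edges))  # ([(1, 2, 1), (2, 3, 2), (0, 2, 3)], 6)
```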
null
false
null
Classify each of the following as either computer hardware or computer software: hard drive, text editor, keyboard, monitor, mouse, operating system, and Internet browser.
hard drive - computer hardware, text editor - computer software, keyboard - computer hardware, monitor - computer hardware, mouse - computer hardware, operating system - computer software, and Internet browser - computer software.
null
false
117
In this section, we identify various linguistic activities on Quora and propose quantifications of the language usage patterns in this Q&A site. In particular, we show that there exist significant differences in the linguistic structure of the open and the answered questions. Note that most of the measures that we define are simple, intuitive and can be easily obtained automatically from the data (without manual intervention). Therefore the framework is practical, inexpensive and highly scalable. The content of a question text is important to attract people and make them engage more with it. The linguistic structure one adopts (i.e., the usage of POS tags, the use of out-of-vocabulary words, character usage, etc.) is a key factor for the answerability of questions. We shall discuss the linguistic structure that often represents the writing style of a question asker. In fig 1 (a), we observe that askers of open questions generally use a greater number of words compared to answered questions. To understand the nature of the words used in the text (standard English words or chat-like words frequently used in social media), we compare the words against the GNU Aspell dictionary to see whether they are present in the dictionary or not. We observe that both open questions and answered questions follow a similar distribution (see fig 1 (b)). Part-of-Speech (POS) tags are indicators of grammatical aspects of texts. To observe how the Part-of-Speech tags are distributed in the question texts, we define a diversity metric. We use the standard CMU POS tagger BIBREF8 for identifying the POS tags of the constituent words in the question. We define the POS tag diversity (POSDiv) of a question $q_i$ as follows: $POSDiv(q_i) = -\sum_{j \in pos_{set}} p_j \log(p_j)$, where $p_j$ is the probability of the $j^{th}$ POS in the set of POS tags. Fig 1 (c) shows that the answered questions have lower POS tag diversity compared to open questions. Question texts undergo several edits so that their readability and the engagement toward them are enhanced. It is interesting to identify how far such edits can make the question different from its original version. To capture this phenomenon, we have adopted ROUGE-LCS recall BIBREF9 from the domain of text summarization. The higher the recall value, the smaller the changes in the question text. From fig 1 (d), we observe that open questions tend to have higher recall compared to the answered ones, which suggests that they have not gone through much text editing, thus allowing almost no scope for readability enhancement. Psycholinguistic analysis: The way an individual talks or writes gives us clues to his/her linguistic, emotional, and cognitive states. A question asker's linguistic, emotional and cognitive states are also revealed through the language he/she uses in the question text. In order to capture such psycholinguistic aspects of the asker, we use Linguistic Inquiry and Word Count (LIWC) BIBREF10, which analyzes various emotional, cognitive, and structural components present in individuals' written texts. LIWC takes a text document as input and outputs a score for each of the LIWC categories, such as linguistic (part-of-speech of the words, function words, etc.) and psychological categories (social, anger, positive emotion, negative emotion, sadness, etc.), based on the writing style and psychometric properties of the document. In table 1, we perform a comparative analysis of the asker's psycholinguistic state while asking an open question and an answered question.
Askers of open questions use more function words, impersonal pronouns and articles on average, whereas askers of answered questions use more personal pronouns, conjunctions and adverbs to describe their questions. Essentially, open questions lack content words compared to answered questions, which, in turn, affects the readability of the question. As far as the psychological aspects are concerned, answered-question askers tend to use more social, family and human-related words on average compared to open-question askers. The open-question askers express more positive emotions, whereas the answered-question askers tend to express more negative emotions in their texts. Also, answered-question askers are more emotionally involved, and their questions reveal higher usage of anger, sadness and anxiety-related words compared to open questions. Open questions, on the other hand, contain more sexual, body and health-related words, which might be a reason why they do not attract answers. In table 2, we show a collection of examples of open questions to illustrate that many of the above quantities based on the linguistic activities described in this section naturally correspond to the factors that human judges consider responsible for a question remaining unanswered. This is one of the prime reasons why these quantities qualify as appropriate indicators of answerability.
Is there any obvious difference between the open and the answered questions?
Yes, there are significant differences in the linguistic structure of the open and the answered questions.
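The POS tag diversity defined above is the Shannon entropy of a question's POS tag distribution; a minimal sketch (the tag sequences are illustrative, not data from the paper):

```python
import math
from collections import Counter

def pos_diversity(pos_tags):
    """POSDiv(q) = -sum_j p_j * log(p_j) over the question's POS tags."""
    counts = Counter(pos_tags)
    total = sum(counts.values())
    return sum(-(c / total) * math.log(c / total) for c in counts.values())

print(pos_diversity(["PRON", "VERB", "DET", "NOUN", "NOUN"]))  # higher diversity
print(pos_diversity(["NOUN", "NOUN", "NOUN", "NOUN"]))         # 0.0, a single tag
```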
null
false
null
What companies are in the Big Four?
The Big Four is made up of Deloitte, PricewaterhouseCoopers (PwC), Ernst and Young (EY), and KPMG. The Big Four are defined as the largest accounting firms in the world, by revenue. In addition to auditing services, these companies offer tax, strategy, and other consulting services.
null
false
null
Does my Myers-Briggs personality type determine my long-term career potential?
No, your Myers-Briggs personality type does not predict your career path. Your personality is a fundamental part of who you are. The Myers-Briggs Type Indicator (MBTI) is a shorthand way to describe your level of introversion versus extroversion, sensitivity to perceptual cues, and the way you evaluate new information. While these factors could individually contribute to your success or failure in a given career path, none of them can statistically predict success or failure.
null
false
7
This paper provided a taxonomy of stereotype-driven descriptions in the Flickr30K dataset. I have divided these descriptions into two classes: linguistic bias and unwarranted inferences. The former corresponds to the annotators' choice of words when confronted with an image that may or may not match their stereotypical expectancies. The latter corresponds to the tendency of annotators to go beyond what the physical data can tell us, and expand their descriptions based on their past experiences and knowledge of the world. Acknowledging these phenomena is important, because on the one hand it helps us think about what is learnable from the data, and on the other hand it serves as a warning: if we train and evaluate language models on this data, we are effectively teaching them to be biased. I have also looked at methods to detect stereotype-driven descriptions, but due to the richness of language it is difficult to find an automated measure. Depending on whether your goal is production or interpretation, it may either be useful to suppress or to emphasize biases in human language. Finally, I have discussed stereotyping behavior as the addition of a contextual layer on top of a more basic description. This raises the question of what kind of descriptions we would like our models to produce.
What questions are raised at the end of this paper?
The author discusses stereotyping behavior as the addition of a contextual layer on top of a more basic description, which raises the question of what kind of descriptions we would like our models to produce.
null
false
null
Where was Bob Dylan born
Duluth, Minnesota
null
false
null
Explain how new technologies, like LLMs and ChatGPT, may revolutionize Application Programming Interfaces (APIs) in the future. Provide an example of how these APIs might work.
As a kid, I loved watching the Jetsons. As I grew older and ended up writing software and creating & implementing architectures for systems, I thought I would never see the day when a Jetsons-like kitchen would be feasible. GPT (ChatGPT, etc.) has changed my mind.

Contractual Obligations

As an architect, you kind of need to think like a lawyer. You define contracts (APIs) between systems or components. You monitor those APIs for abuse or misuse. I would use Swagger editor to define those contracts, and I would write the APIs to match interactions implied in mock-ups. I would vet the design with the team, walk through scenarios to see if the API would fit the use case and would not introduce performance bottlenecks, and simulate and automate as much as I could along the journey. I was very careful to get it right the first time. It was never perfect the first time, of course. So, I also had to consider how the API could evolve while minimizing costs/impact to the rest of the system. The API, once deployed, would become something I would guard, like an honest politician guards the #usconstitution. The team had to be careful with any change that impacted the API.

API Mesh... More like API mess!

APIs rely on APIs which rely on more APIs and libraries. Once a system gets large enough, that complexity becomes a bottleneck. Small changes can have a huge ripple effect. Some approaches (like federated #GraphQL) help manage this complexity nicely. But it still involves an API under which there can exist a lot of fragility. #Agile or not, developing a complicated web of APIs is complex, requires specialized skills, and is full of unforeseen challenges.

The Kitchen Conversation, ChatGPT-style

The Jetsons' kitchen involves a number of systems, each composed of various parts, sensors, hardware, inputs and outputs. Considering the complexity of interactions between independent systems in a kitchen, and the fact that the Jetsons' kitchen components would come from various vendors or manufacturers, traditional API design would not be able to fulfill the vision. Not practical, at all. Now, let's think about APIs and the Jetsons' kitchen, #chatgpt-style. Let's keep it simple: pizza. You (or your robot) make a pizza from scratch.

The oven asks the refrigerator: "I see we have a pizza. Can you tell me if this is low-moisture mozzarella?"

Refrigerator: "That's not even mozzarella. This pizza must have Muenster cheese as that is all I had in me. And the human removed some of the contents 30 minutes ago, so the probability of Muenster cheese on that pizza is quite high."

A sensor from the kitchen who has eyes across the room chimes in at this point.

Sensor: "Beware that the crust of this pizza is made of cauliflower. I agree with the video sensor's assessment regarding the type of cheese. No new cheese entered the kitchen recently."

Oven: "Anything else I should know?"

Sensor: "The temperature of the dough is 47F. This is cooler than the expert chef consensus of using room-temperature dough."

Oven: "Great. I know what to do. Since the dough is cool, I will start cooking with a temperature a little lower than the recipe specifies. Then I will increase the temp of the bottom element so as to provide a nice crisp crust. I will keep a close eye on moisture and may use convection to mitigate any undesirable levels of moisture."

Where is the API that contractually binds the independent systems? APIs are not gone. For instance, the brain of the Oven will have myriad APIs (e.g.
for interacting with temperature controllers and sensors, etc.). The key here is that the interface between systems is no longer a fixed, rigid API. No method signatures, no paths, no payload specifications. Instead, a conversation relating to senses and capabilities.

What "brain"?

Kence Anderson's book Designing Autonomous AI describes nicely how a complex system can be decomposed into a number of independent "brains." Each brain, in his book, can be independently developed and refined, with human guidance married with #datascience, #machinelearning and #dataengineering.

What's Cookin'?

I am excited for what the future holds. How architects rethink complex system design with regards to APIs is, in my opinion, something that is inevitable. And, as an amateur pizza maker, my taste buds are watering for the day I get some GPT help to make an even better #detroitpizza.
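A purely illustrative Python sketch of the idea in this piece: appliances share a natural-language transcript instead of fixed method signatures. Everything here (the Appliance class, the fake_llm stand-in for a real LLM call) is hypothetical:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Message:
    sender: str
    text: str

class Appliance:
    """No fixed method signatures, paths, or payload specifications: each
    appliance reads the shared transcript and replies in natural language."""
    def __init__(self, name: str, ask: Callable[[str, str], str]):
        self.name = name
        self.ask = ask  # hypothetical stand-in for a call to an LLM

    def respond(self, transcript: List[Message]) -> Message:
        history = "\n".join(f"{m.sender}: {m.text}" for m in transcript)
        return Message(self.name, self.ask(f"You are the kitchen {self.name}.", history))

def fake_llm(system_prompt: str, history: str) -> str:
    # canned reply so the sketch runs without any real model behind it
    return "Acknowledged; adjusting my plan based on the conversation so far."

oven = Appliance("oven", fake_llm)
fridge = Appliance("refrigerator", fake_llm)

transcript = [Message("oven", "I see we have a pizza. Is this low-moisture mozzarella?")]
transcript.append(fridge.respond(transcript))
transcript.append(oven.respond(transcript))
for m in transcript:
    print(f"{m.sender}: {m.text}")
```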
null
false
null
How do I find my Tesla charging station?
To enable Trip Planner, touch 'Settings' > 'Navigation' > 'Trip Planner' on your touchscreen. Once you have selected a site, your vehicle's navigation will automatically route you to the nearest entrance of the property. You can then zoom in on the map to find the exact location of the station.
null
false
164
A core step in many analyses is translating social and cultural concepts (such as hate speech, rumor, or conversion) into measurable quantities. Before we can develop measurements for these concepts (the operationalization step, or the “implementation” step as denoted by BIBREF12), we need to define them. In the conceptualization phase we often start with questions such as: who are the domain experts, and how have they approached the topic? We are looking for a definition of the concept that is flexible enough to apply to our dataset, yet formal enough for computational research. For example, our introductory study on hate speech BIBREF0 used a statement on hate speech produced by the European Union Court of Human Rights. The goal was not to implement this definition directly in software but to use it as a reference point to anchor subsequent analyses. If we want to move beyond the use of ad hoc definitions, it can be useful to distinguish between what political scientists Adcock and Collier call the “background concept” and the “systematized concept” BIBREF13. The background concept comprises the full and diverse set of meanings that might be associated with a particular term. This involves delving into theoretical, conceptual, and empirical studies to assess how a concept has been defined by other scholars and, most importantly, to determine which definition is most appropriate for the particular research question and the theoretical framework in which it is situated. That definition, in turn, represents the systematized concept: the formulation that is adopted for the study. It is important to consider that for social and cultural concepts there is no absolute ground truth. There are often multiple valid definitions for a concept (the “background” concept in the terms of Adcock and Collier), and definitions might be contested over time. This may be uncomfortable for computer scientists, whose primary measure of success is often based on comparing a model's output against “ground truth” or a “gold standard”, e.g., by comparing a sentiment classifier's output against manual annotations. However, the notion of ground truth is uncommon in the humanities and the social sciences, and it is often taken too far in machine learning. BIBREF14 notes that in literary criticism and the digital humanities more broadly “interpretation, ambiguity, and argumentation are prized far above ground truth and definitive conclusions". BIBREF15 draw attention to the different attitudes of literary scholars and computational linguists towards ambiguity, stating that “In Computational Linguistics [..] ambiguity is almost uniformly treated as a problem to be solved; the focus is on disambiguation, with the assumption that one true, correct interpretation exists." The latter is probably true for tasks such as spam filtering, but in the social sciences and the humanities many relevant concepts are fundamentally unobservable, such as latent traits of political actors BIBREF16 or cultural fit in organizations BIBREF17, leading to validation challenges. Moreover, when the ground truth comes from people, it may be influenced by ideological priors, priming, simple differences of opinion or perspective, and many other factors BIBREF18. We return to this issue in our discussions on validation and analysis.
What kind of definition of the concept are the team looking for?
They are looking for a definition of the concept that is flexible enough to apply on their dataset, yet formal enough for computational research.
null
false
null
Can I unilaterally change terms of a contract?
The general rule is no. If a party were allowed to unilaterally change the terms of a contract, the contract would lose much of its value: an obvious change would be to reduce what that party has promised to little or nothing at all. However, some contracts do allow a party to unilaterally change certain terms, provided the changes do not take away the substantial benefit of the contract from the other party or parties. If a party has agreed to allow the other to vary any of the terms of the contract, this is not ideal, and the implications would depend on the extent and nature of the variations made. It is therefore prudent for both parties to avoid provisions allowing unilateral changes to the contract or, at the very least, to expressly limit the scope of such changes.
1606.05320
false
null
FLOAT SELECTED: Table 1: Predictive loglikelihood (LL) comparison, sorted by validation set performance. FLOAT SELECTED: Table 1: Predictive loglikelihood (LL) comparison, sorted by validation set performance.
How large is the gap in performance between the HMMs and the LSTMs?
With similar number of parameters, the log likelihood is about 0.1 lower for LSTMs across datasets. When the number of parameters in LSTMs is increased, their log likelihood is up to 0.7 lower.
null
false
343
The use of categorical attributes (e.g., user, topic, aspects) in the sentiment analysis community BIBREF0, BIBREF1, BIBREF2 is widespread. Prior to the deep learning era, this information was used as effective categorical features BIBREF3, BIBREF4, BIBREF5, BIBREF6 for the machine learning model. Recent work has used them to improve the overall performance BIBREF7, BIBREF8, interpretability BIBREF9, BIBREF10, and personalization BIBREF11 of neural network models in different tasks such as sentiment classification BIBREF12, review summarization BIBREF13, and text generation BIBREF8. In particular, user and product information have been widely incorporated in sentiment classification models, especially since they are important metadata attributes found in review websites. BIBREF12 first showed a significant accuracy increase in neural models when this information is used. Currently, the accepted standard method is to use them as additional biases when computing the weights $a$ in the attention mechanism, as introduced by BIBREF7 as $a \propto \exp(v^\top \tanh(Wh + W_u u + W_p p + b))$, where $u$ and $p$ are the user and product embeddings, and $h$ is a word encoding from BiLSTM. Since then, most of the subsequent work attempted to improve the model by extending the model architecture to be able to utilize external features BIBREF14, handle cold-start entities BIBREF9, and represent user and product separately BIBREF15. Intuitively, however, this method is not the ideal way to represent and inject attributes, for two reasons. First, representing attributes as additional biases cannot model the relationship between the text and attributes. Rather, it only adds user- and product-specific biases that are independent of the text when calculating the attention weights. Second, injecting the attributes in the attention mechanism means that user and product information are only used to customize how the model chooses which words to focus on, as also shown empirically in previous work BIBREF7, BIBREF15. However, we argue that there are more intuitive locations to inject the attributes, such as when contextualizing words to modify their sentiment intensity. We propose to represent user and product information as weight matrices (i.e., $W$ in the equation above). Directly incorporating these attributes into $W$ leads to a large increase in parameters and subsequently makes the model difficult to optimize. To mitigate these problems, we introduce chunk-wise importance weight matrices, which (1) use a weight matrix smaller than $W$ by a chunk-size factor, and (2) transform this matrix into gates such that they correspond to the relative importance of each neuron in $W$. We investigate the use of this method when injected into several locations in the base model: word embeddings, BiLSTM encoder, attention mechanism, and logistic classifier. The results of our experiments can be summarized in three statements. First, our preliminary experiments show that doing bias-based attribute representation and attention-based injection is not an effective method to incorporate user and product information in sentiment classification models. Second, despite using only a simple BiLSTM with attention classifier, we significantly outperform previous state-of-the-art models that use more complicated architectures (e.g., models that use hierarchical models, external memory networks, etc.). Finally, we show that these attribute representations transfer well to other tasks such as product category classification and review headline generation.
First, our preliminary experiments show that doing bias-based attribute representation and attention-based injection is not an effective method to incorporate user and product information in sentiment classification models.
Is doing bias-based attribute representation and attention-based injection an effective method to incorporate user and product information in sentiment classification models?
No, it isn't.
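A minimal numpy sketch of the chunk-wise gating idea described above, assuming illustrative shapes and a random stand-in for the learned attribute-to-chunk projection (this is not the paper's code):

```python
import numpy as np

def chunk_gates(attr_emb, d_out, d_in, chunk=4, seed=0):
    """Map an attribute embedding to a small (d_out/chunk x d_in/chunk) matrix,
    tile it back up to the full shape, and squash it into (0, 1) gates."""
    rng = np.random.default_rng(seed)
    # Random stand-in for the learned attribute-to-chunk projection.
    proj = rng.standard_normal((attr_emb.shape[0], (d_out // chunk) * (d_in // chunk)))
    g = (attr_emb @ proj).reshape(d_out // chunk, d_in // chunk)
    g = np.kron(g, np.ones((chunk, chunk)))   # repeat each entry over a chunk
    return 1.0 / (1.0 + np.exp(-g))           # sigmoid -> relative importance gates

d_out, d_in = 8, 8
W = np.random.randn(d_out, d_in)              # shared weight matrix
u = np.random.randn(16)                       # user embedding (illustrative size)
W_u = chunk_gates(u, d_out, d_in) * W         # user-specific version of W
```

Because the gates are generated from a matrix smaller than W by the chunk-size factor, the per-attribute parameter cost stays modest while every neuron of W is still rescaled.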
null
false
null
Why babies cry ?
Since babies can’t speak, crying is their way to communicate what they need or feel. For instance, babies can cry because they are hungry, because their diaper is dirty, because their tummy hurts, or just because they need to let go of emotions. The type, the intensity and the length of cries can vary from one baby to another.
null
false
null
William Shakespeare (bapt. 26 April 1564 – 23 April 1616) was an English playwright, poet and actor. He is regarded as the greatest writer in the English language and the world's pre-eminent dramatist. He is often called England's national poet and the "Bard of Avon" (or simply "the Bard"). His extant works, including collaborations, consist of some 39 plays, 154 sonnets, three long narrative poems, and a few other verses, some of uncertain authorship. His plays have been translated into every major living language and are performed more often than those of any other playwright. He remains arguably the most influential writer in the English language, and his works continue to be studied and reinterpreted.
Who is William Shakespeare
Many experts agree William Shakespeare is the most important author in the English language. Although this is a hotly debated topic, he remains the best-known writer of the Elizabethan period. He wrote many plays; his most famous tragedy is Hamlet. He also wrote much poetry in the form of sonnets. Arguably, the most interesting sonnet by William Shakespeare is Sonnet 95.
null
false
null
What is Ruby?
Ruby is an interpreted, high-level, general-purpose programming language which supports multiple programming paradigms. It was designed with an emphasis on programming productivity and simplicity. In Ruby, everything is an object, including primitive data types. It was developed in the mid-1990s by Yukihiro "Matz" Matsumoto in Japan.
null
false
null
What is the Monty Hall problem?
The Monty Hall problem is a probability brain teaser that many consider to be counterintuitive. The problem is stated as follows: Imagine you are a contestant on a game show and the host shows you 3 doors. Behind one of the doors is a car and behind the other two are goats. The prizes have been randomly assigned to the doors and you want to win the car. The host allows you to pick one of the doors as a first selection. Then, the host opens one of the remaining doors and reveals a goat behind it. The host now allows you to decide if you want to switch from your selected door to the remaining unopened door. Should you switch, to maximize your probability of winning the car? Most people believe switching does not improve your probability of winning the car. However, switching does increase your probability of winning the car from 1/3 to 2/3. An intuitive way to understand this problem is to first consider the initial state. At your first selection, the probability that you have selected the right door is 1/3 and the probability that the car is behind one of the two unselected doors is 2/3. Then the host reveals more information: one of the unselected doors does not have the car behind it. The probability that the car is behind one of the originally unselected doors is still 2/3, but you now know specifically which one of them does not have the car. Therefore, the remaining unopened unselected door must have a probability of 2/3 of containing the car, and your originally selected door must have a probability of 1/3. When offered the chance, you should switch to maximize your chances of winning the car.
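The 1/3-versus-2/3 argument above is easy to verify numerically; a minimal simulation sketch:

```python
import random

def monty_hall(trials=100_000):
    switch_wins = stay_wins = 0
    for _ in range(trials):
        car, pick = random.randrange(3), random.randrange(3)
        # Host opens a door that is neither the contestant's pick nor the car.
        opened = next(d for d in range(3) if d not in (pick, car))
        switched = next(d for d in range(3) if d not in (pick, opened))
        stay_wins += pick == car
        switch_wins += switched == car
    return stay_wins / trials, switch_wins / trials

print(monty_hall())  # roughly (0.333, 0.667)
```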
null
false
357
The ideological leanings of a person within the left-right political spectrum are often reflected by how one feels about different topics and by means of preferences among various choices on particular issues. For example, a left-leaning person would prefer nationalization and state control of public services (such as healthcare) where privatization would be often preferred by people that lean towards the right. Likewise, a left-leaning person would often be supportive of immigration and will often talk about immigration in a positive manner citing examples of benefits of immigration on a country's economy. A right-leaning person, on the other hand, will often have a negative opinion about immigration. Most of the existing works on political ideology detection from text have focused on utilizing bag-of-words and other syntactic features to capture variations in language use BIBREF0 , BIBREF1 , BIBREF2 . We propose an alternative mechanism for political ideology detection based on sentiment analysis. We posit that adherents of a political ideology generally have similar sentiment toward specific topics (for example, right wing followers are often positive towards free markets, lower tax rates, etc.) and thus, a political ideology can be represented by a characteristic sentiment distribution over different topics (Section SECREF3 ). This topic-specific sentiment representation of a political ideology can then be used for automatic ideology detection by comparing the topic-specific sentiments as expressed by the content in a document (news article, magazine article, collection of social media posts by a user, utterances in a conversation, etc.). In order to validate our hypothesis, we consider exploiting the sentiment information towards topics from archives of political debates to build a model for identifying political orientation of speakers as one of right or left leaning, which corresponds to Republicans and Democrats respectively, within the context of US politics. This is inspired by our observation that the political leanings of debaters are often expressed in debates by way of speakers' sentiments towards particular topics. Parliamentary or Senate debates often bring the ideological differences to the centre stage, though somewhat indirectly. Heated debates in such forums tend to focus on the choices proposed by the executive that are in sharp conflict with the preference structure of the opposition members. Due to this inherent tendency of parliamentary debates to focus on topics of disagreement, the sentiments exposited in debates hold valuable cues to identify the political orientation of the participants. We develop a simple classification model that uses a topic-specific sentiment summarization for republican and democrat speeches separately. Initial results of experiments conducted using a widely used dataset of US Congress debates BIBREF3 are encouraging and show that this simple model compares well with classification models that employ state-of-the-art distributional text representations (Section SECREF4 ). Initial results of experiments conducted using a widely used dataset of US Congress debates are encouraging and show that this simple model compares well with classification models that employ state-of-the-art distributional text representations (Section SECREF4 ).
What dataset do they use for their experiments?
A widely used dataset of US Congress debates.
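A minimal sketch of the idea of representing each ideology by a characteristic topic-sentiment profile and classifying by proximity; the topics, profile values, and cosine scoring below are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

# Topic-specific sentiment profiles in [-1, 1]; values are made up.
topics = ["immigration", "taxes", "healthcare", "free markets"]
profiles = {"right": np.array([-0.6, -0.7, -0.3, +0.8]),
            "left":  np.array([+0.5, +0.4, +0.7, -0.4])}

def classify(doc_sentiments):
    """Assign the ideology whose topic-sentiment profile is closest (cosine)."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(profiles, key=lambda k: cos(doc_sentiments, profiles[k]))

print(classify(np.array([+0.4, +0.2, +0.6, -0.1])))  # -> "left"
```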
null
false
null
The Schauspielhaus Zürich (English: Zürich playhouse) is one of the most prominent and important theatres in the German-speaking world. It is also known as "Pfauenbühne" (Peacock Stage). The large theatre has 750 seats. The Schauspielhaus also operates three stages in the Schiffbau in the western part of Zürich, the Schiffbau/Halle (400 seats), the Schiffbau/Box (up to 200 seats) and the Schiffbau/Matchbox (80 seats).
Given this paragraph, show me how to calculate how many total patrons can be served by all of the Schauspielhaus theaters?
1430 total patrons can be served. There are 750 seats in the Schauspielhaus Zürich, 400 in the Schiffbau/Halle, 200 in the Schiffbau/Box and 80 in the Schiffbau/Matchbox. This totals 750 + 400 + 200 + 80 = 1430 seats. Each patron requires one seat.
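A one-line check of the arithmetic (the dictionary keys are just labels):

```python
seats = {"Pfauenbühne": 750, "Schiffbau/Halle": 400,
         "Schiffbau/Box": 200, "Schiffbau/Matchbox": 80}
print(sum(seats.values()))  # 1430
```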
null
false
null
Classify the following as being either being a type of shark or a type of bird: hammerhead, eagle, tiger, sawtooth, blacktip, oriole, osprey, falcon, great white.
Hammerhead, tiger, sawtooth, blacktip, and great white are types of sharks. Eagle, oriole, osprey, and falcon are types of birds.
null
false
null
List some of the extinct houses in the series "A Song of Ice and Fire."
Greystark, Fisher, Lothston, Qoherys, Caron, Toyne, Durrendon, Gardener
null
false
null
Princess Vera Ignatievna Gedroits was a Russian doctor of medicine and author. She was the first woman military surgeon in Russia, the first woman professor of surgery, and the first woman to serve as a physician to the Imperial Court of Russia. Following her involvement in a student movement, Gedroits was unable to complete her studies in Russia, and despite being openly lesbian, entered into a marriage of convenience, which allowed her to obtain a passport in another name and leave the country. In Switzerland, she enrolled in the medical courses of César Roux and graduated in 1898, working as Roux’s assistant, but returned to Russia because of illnesses in her family. As a young physician, Gedroits was concerned at the low standards of hygiene, nutrition and sanitation, and made recommendations to improve conditions. In the Russo-Japanese War, she performed abdominal surgeries against established policy, leading to a change in the way battlefield medicine was performed. Much decorated for her war service, she served as physician to the royal court until the outbreak of World War I, training the Tsarina Alexandra and her daughters as nurses. At the beginning of the Revolution, Gedroits returned to the battle front. Wounded, she was evacuated to Kiev, where she resumed her work as a physician and academic. In 1921, she was hired to teach pediatric surgery at the Kiev Medical Institute and within two years was appointed a professor of medicine. Soviet purges at that time removed her from office in 1930 and denied her a pension. Gedroits turned her attention to writing autobiographical novels until her death from uterine cancer in 1932.
Name a battlefield surgeon
Vera Gedroits was a ground-breaking physician and surgeon who improved surgical practices on the battlefield and became the first woman to serve as a physician to the Imperial Court of Russia.
null
false
null
Did Sir Winston Churchill win the Nobel Peace Prize?
No, Sir Winston Churchill did not win the Nobel Peace Prize. He was nominated for the Nobel Peace Prize twice, in 1945 and in 1950, and did not win it. However, he did win the Nobel Prize in Literature in 1953.
null
false
null
How did 'March Madness' get its name?
"March Madness" was originally coined after a high school basketball tournament in Illinois dating back to 1908. Henry V. Porter wrote an essay titled "March Madness" in the Illinois Interscholastic in 1939 to commemorate the tournament. Other media outlets embraced the term as the tournament continued to sell out and thrive for decades. Brent Musburger, a former sportswriter from Chicago, brought the term "March Madness" to the NCAA tournament when covering the event for CBS in 1982. The phrase became synonymous with the tournament ever since. In 2010, the NCAA made it official and paid IHSA (The Illinois High School Association) $17.2M for the rights to the phrase, and began using the trademark in marketing materials, logos and other branding material. While only used for the men's tournament for decades, the NCAA only began using the phrase "March Madness" for the women's tournament in 2022.
null
false
null
What are the main components of AWS Kinesis?
Kinesis is a distributed system that is composed of several components. The main components of Kinesis are:
1. Streams: A stream is a group of records that are ingested and processed in real-time by Kinesis. A stream consists of one or more shards, and each shard is an ordered sequence of data records.
2. Shards: A shard is a sequence of records in a stream. It is equivalent to a data partition. Each shard has a specific sequence of records and provides a specific read and write capacity.
3. Producers: Producers are responsible for sending data to a Kinesis stream. Producers can be any application that can send data to an HTTP/HTTPS endpoint.
4. Consumers: Consumers are responsible for reading and processing data from a Kinesis stream. Consumers can be any application that can read data from an HTTP/HTTPS endpoint.
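A minimal boto3 sketch of a producer and a consumer against an existing stream; the stream name, region, and payload are illustrative, and AWS credentials are assumed to be configured:

```python
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Producer: write one record; the partition key determines the target shard.
kinesis.put_record(
    StreamName="demo-stream",
    Data=b'{"event": "click"}',
    PartitionKey="user-42",
)

# Consumer: read from the beginning of the stream's first shard.
shard_id = kinesis.describe_stream(StreamName="demo-stream")[
    "StreamDescription"]["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName="demo-stream",
    ShardId=shard_id,
    ShardIteratorType="TRIM_HORIZON",
)["ShardIterator"]
for record in kinesis.get_records(ShardIterator=iterator, Limit=10)["Records"]:
    print(record["Data"])
```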
null
false
null
Which characters belong to DC or Marvel Universe? Iron Man, Batman, Steel
Batman and Steel are DC, Iron Man is Marvel
null
false
121
Until recently, efforts to bring out semantics from structured data relied heavily on expert knowledge BIBREF22, BIBREF8. For example, in order to better transcribe numerical time series of weather data to a textual forecast, Reiter et al. BIBREF8 devise complex template schemes in collaboration with weather experts to build a consistent set of data-to-word rules. Modern approaches to the wide range of tasks based on structured data (e.g. table retrieval BIBREF2, BIBREF23, table classification BIBREF24, question answering BIBREF25) now propose to leverage progress in deep learning to represent these data in a semantic vector space (also called embedding space). In parallel, an emerging task, called “data-to-text”, aims at describing structured data in a natural language description. This task stems from the neural machine translation (NMT) domain, and early works BIBREF26, BIBREF27, BIBREF10 represent the data records as a single sequence of facts to be entirely translated into natural language. Wiseman et al. BIBREF10 show the limits of traditional NMT systems on larger structured data, where NMT systems fail to accurately extract salient elements. To improve these models, a number of works BIBREF28, BIBREF12, BIBREF29 proposed innovative decoding modules based on planning and templates, to ensure factual and coherent mentions of records in generated descriptions. For example, Puduppully et al. BIBREF12 propose a two-step decoder which first targets specific records and then uses them as a plan for the actual text generation. Similarly, Li et al. BIBREF28 proposed a delayed copy mechanism where their decoder also acts in two steps: 1) using a classical LSTM decoder to generate delexicalized text and 2) using a pointer network BIBREF30 to replace placeholders by records from the input data. Closer to our work, very recent works BIBREF11, BIBREF19, BIBREF13 have proposed to take into account the data structure. More particularly, Puduppully et al. BIBREF13 follow entity-centric theories BIBREF31, BIBREF32 and propose a model based on dynamic entity representation at decoding time. It consists in conditioning the decoder on entity representations that are updated during inference at each decoding step. On the other hand, Liu et al. BIBREF11, BIBREF19 rather focus on introducing structure into the encoder. For instance, they propose a dual encoder BIBREF19 which encodes separately the sequence of element names and the sequence of element values. These approaches are however designed for single-entity data structures and do not account for delimitation between entities. Our contribution differs from previous work in several aspects. First, instead of flatly concatenating elements from the data structure and encoding them as a sequence BIBREF11, BIBREF12, BIBREF10, we constrain the encoding to the underlying structure of the input data, so that the delimitation between entities remains clear throughout the process. Second, unlike all works in the domain, we exploit the Transformer architecture BIBREF21 and leverage its particularity to directly compare elements with each other in order to avoid arbitrary assumptions on their ordering. Finally, in contrast to BIBREF33, BIBREF13 that use a complex updating mechanism to obtain a dynamic representation of the input data and its entities, we argue that explicit hierarchical encoding naturally guides the decoding process via hierarchical attention.
First, instead of flatly concatenating elements from the data-structure and encoding them as a sequence [18,28,39], we constrain the encoding to the underlying structure of the input data, so that the delimitation between entities remains clear throughout the process.
Why do they constrain the encoding to the underlying structure of the input data?
To ensure the delimitation between entities remains clear throughout the process.
2001.05970
false
null
We examine other features regarding the characteristics of the studied colleges, which might be significant factors of sexual harassment. Four factual attributes pertaining to the 200 colleges are extracted from the U.S. News Statistics, which consists of Undergraduate Enrollment, Male/Female Ratio, Private/Public, and Region (Northeast, South, West, and Midwest). We also use the normalized rape-related cases count (number of cases reported per student enrolled) from the stated government resource as another attribute to examine the proximity of our dataset to the official one. This feature vector is then fitted in a linear regression to predict the normalized #metoo users count (number of unique users who posted #MeToo tweets per student enrolled) for each individual college. Four factual attributes pertaining to the 200 colleges are extracted from the U.S. News Statistics, which consists of Undergraduate Enrollment, Male/Female Ratio, Private/Public, and Region (Northeast, South, West, and Midwest).
Which major geographical regions are studied?
Northeast U.S., South U.S., West U.S., and Midwest U.S.
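A minimal scikit-learn sketch of fitting such a regression; the feature values and targets below are made up for illustration (assumes scikit-learn >= 1.2 for the `sparse_output` argument):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import OneHotEncoder

# Toy rows: [enrollment, male/female ratio, is_private, normalized rape cases]
X_num = np.array([[30000, 0.9, 0, 0.002],
                  [ 5000, 1.1, 1, 0.004],
                  [12000, 0.8, 0, 0.001]])
region = np.array([["Northeast"], ["South"], ["Midwest"]])
X = np.hstack([X_num, OneHotEncoder(sparse_output=False).fit_transform(region)])
y = np.array([0.010, 0.025, 0.007])  # normalized #MeToo user counts (made up)

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)
```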
null
false
193
Natural language inference (NLI) is a widely-studied problem in natural language processing. It aims at comparing a pair of sentences (i.e. a premise and a hypothesis), and inferring the relationship between them (i.e., entailment, neutral and contradiction). Large-scale datasets like SNLI BIBREF0 and MultiNLI BIBREF1 have been created by crowd-sourcing and fertilized NLI research substantially. However, several works BIBREF2, BIBREF3, BIBREF4 have pointed out that crowd-sourcing workers have brought a bias pattern named annotation artifacts into these NLI datasets. Such artifacts in hypotheses can reveal the labels and make it possible to predict the labels solely by looking at the hypotheses. For example, models trained on SNLI with only the hypotheses can achieve an accuracy of 67.0%, while the baseline of always predicting the majority class achieves only 34.3% BIBREF2. Classifiers trained on NLI datasets are supposed to make predictions by understanding the semantic relationships between given sentence pairs. However, it is shown that models are unintentionally utilizing the annotation artifacts BIBREF4, BIBREF2. If the evaluation is conducted under a similar distribution as the training data, e.g., with the given testing set, models will enjoy additional advantages, making the evaluation results over-estimated. On the other hand, if the bias pattern cannot be generalized to the real-world, it may introduce noise to models, thus hurting the generalization ability. In this paper, we use cross-dataset testing to better assess models' generalization ability. We investigate the impacts of annotation artifacts in cross-dataset testing. Furthermore, we propose an easy-to-adopt debiasing training framework, which doesn't require any additional data or annotations, and apply it to the high-performing Densely Interactive Inference Network BIBREF5. Experiments show that our method can effectively mitigate the bias pattern and improve the cross-dataset generalization ability of models. To the best of our knowledge, our work is the first attempt to alleviate the annotation artifacts without any extra resources. Experiments show that our method can effectively mitigate the bias pattern and improve the cross-dataset generalization ability of models.
Can the method effectively mitigate the bias pattern and improve the cross-dataset generalization ability of models?
Yes, it can.
null
false
null
Can dogs look up?
Despite what the character "Big Al" in the 2004 Horror Comedy "Shaun of the Dead" thought, dogs can, indeed, look up. The film is responsible for perpetuating the myth even though in context the quote is meant to throw doubt on the validity of anything that "Big Al" says.
null
false
451
A kernel machine uses the kernel trick to map the non-linear problem into a feature space where the problem may be linearly separable with an appropriate kernel function. Next we give the formal definition of kernel machines discussed in this paper. Given a training data set {X, y} of n training instances, where {X ∈ ℝ^{n×d}, y ∈ ℝ^n} = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)} and (x_i, y_i) denotes the instance x_i ∈ ℝ^d with its label y_i, the objective of kernel machine training is to find an optimal ω* which minimizes the structural risk: min_ω λ‖ω‖² + (1/n) ∑_{i=1}^{n} l(f(ω, x_i), y_i) (1), where λ denotes the regularization constant and f(ω, x_i) = ⟨ω, φ(x_i)⟩. The variable ω is defined on the reproducing kernel Hilbert space (RKHS) and ⟨·, ·⟩ is the inner product on the RKHS. The function φ(·) maps the instances from their original data space to a higher-dimensional feature space induced by the kernel function. Assume the loss l(·, ·) is an affine function of ω. The representer theorem shows that a minimizer of the optimization problem (1) takes the form ω* = ∑_{i=1}^{n} α_i φ(x_i), where k(x_i, x_j) denotes a positive definite kernel function and k(x_i, x_j) = ⟨φ(x_i), φ(x_j)⟩. By substituting the expressions of f(ω, x_i) and ω into Equation (1), we have the objective with respect to α: min_α λ α^T H α + (1/n) ∑_{i=1}^{n} l(∑_{j=1}^{n} α_j k(x_j, x_i), y_i), where α = [α_1 ... α_n]^T is an n-dimensional vector, each dimension of which corresponds to the contribution of a training instance to the kernel machine. Then, we can derive that the Hessian matrix H = [H_ij]_{n×n} of this problem is equal to the kernel matrix. The element in the i-th row and j-th column of the matrix H is H_ij = k(x_i, x_j). The derivations of the first and second derivatives can be found in the supplementary material. For clarity, we use the kernel matrix H to denote the Hessian matrix of the kernel machine in the rest of the paper. Assume the loss l(·, ·) is an affine function of ω.
Assuming c in Eq. (1) denotes the labels; did you mean the loss \ell(f(X), c) is a convex function and not affine?
The loss is assumed to be a convex and affine function. We've reorganized the equations in Section 2 for clarity.
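A minimal numpy sketch of the quantities defined above: the kernel matrix H (which the text identifies with the Hessian of the α-objective) and the objective in α, using a squared loss purely for illustration rather than the affine loss assumed in the paper:

```python
import numpy as np

def rbf_kernel_matrix(X, gamma=0.5):
    """H[i, j] = k(x_i, x_j) for the RBF kernel; the text above identifies
    this kernel matrix with the Hessian of the objective in alpha."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
y = rng.standard_normal(5)
H = rbf_kernel_matrix(X)

def objective(alpha, lam=0.1):
    # f(x_i) = sum_j alpha_j k(x_j, x_i); regularizer ||w||^2 = alpha^T H alpha
    f = H @ alpha
    return lam * alpha @ H @ alpha + np.mean((f - y) ** 2)

print(objective(np.zeros(5)))
```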
null
false
null
The Beatles were an English rock band, formed in Liverpool in 1960, that comprised John Lennon, Paul McCartney, George Harrison and Ringo Starr. They are regarded as the most influential band of all time and were integral to the development of 1960s counterculture and popular music's recognition as an art form. Rooted in skiffle, beat and 1950s rock 'n' roll, their sound incorporated elements of classical music and traditional pop in innovative ways; the band also explored music styles ranging from folk and Indian music to psychedelia and hard rock. As pioneers in recording, songwriting and artistic presentation, the Beatles revolutionised many aspects of the music industry and were often publicised as leaders of the era's youth and sociocultural movements. Led by primary songwriters Lennon and McCartney, the Beatles evolved from Lennon's previous group, the Quarrymen, and built their reputation playing clubs in Liverpool and Hamburg over three years from 1960, initially with Stuart Sutcliffe playing bass. The core trio of Lennon, McCartney and Harrison, together since 1958, went through a succession of drummers, including Pete Best, before asking Starr to join them in 1962. Manager Brian Epstein moulded them into a professional act, and producer George Martin guided and developed their recordings, greatly expanding their domestic success after signing to EMI Records and achieving their first hit, "Love Me Do", in late 1962. As their popularity grew into the intense fan frenzy dubbed "Beatlemania", the band acquired the nickname "the Fab Four", with Epstein, Martin or another member of the band's entourage sometimes informally referred to as a "fifth Beatle". By early 1964, the Beatles were international stars and had achieved unprecedented levels of critical and commercial success. They became a leading force in Britain's cultural resurgence, ushering in the British Invasion of the United States pop market, and soon made their film debut with A Hard Day's Night (1964). A growing desire to refine their studio efforts, coupled with the untenable nature of their concert tours, led to the band's retirement from live performances in 1966. At this time, they produced records of greater sophistication, including the albums Rubber Soul (1965), Revolver (1966) and Sgt. Pepper's Lonely Hearts Club Band (1967), and enjoyed further commercial success with The Beatles (also known as "the White Album", 1968) and Abbey Road (1969). The success of these records heralded the album era, as albums became the dominant form of record consumption over singles; they also increased public interest in psychedelic drugs and Eastern spirituality, and furthered advancements in electronic music, album art and music videos. In 1968, they founded Apple Corps, a multi-armed multimedia corporation that continues to oversee projects related to the band's legacy. After the group's break-up in 1970, all principal former members enjoyed success as solo artists and some partial reunions have occurred. Lennon was murdered in 1980 and Harrison died of lung cancer in 2001. McCartney and Starr remain musically active. The Beatles are the best-selling music act of all time, with estimated sales of 600 million units worldwide. They are the most successful act in the history of the US Billboard charts. They hold the record for most number-one albums on the UK Albums Chart (15), most number-one hits on the US Billboard Hot 100 chart (20), and most singles sold in the UK (21.9 million). 
The band received many accolades, including seven Grammy Awards, four Brit Awards, an Academy Award (for Best Original Song Score for the 1970 documentary film Let It Be) and fifteen Ivor Novello Awards. They were inducted into the Rock and Roll Hall of Fame in 1988, and each principal member was inducted individually between 1994 and 2015. In 2004 and 2011, the group topped Rolling Stone's lists of the greatest artists in history. Time magazine named them among the 20th century's 100 most important people.
Create a list using bullets of the Beatles band members and tell me if they are alive or dead using the format {Name} / {Alive or Dead}
The Beatles members are:
- John Lennon / Dead
- Paul McCartney / Alive
- George Harrison / Dead
- Ringo Starr / Alive
null
false
null
What should I get my wife for mother's day?
Mother's Day is a great opportunity for a thoughtful gift. If she has a car, getting her car washed can be a nice surprise. You can pair that with a nice meal out, but instead of having her pick the place, give two or three suggestions and let her choose. If she often plans the meals, asking her to just pick the place without options shifts the cognitive load to her and is not a gift if she already plans the meals daily. If you would like a gift to wrap and are not sure what she would like, this is where you can phone a friend or female family member and see if they have any suggestions for gifts. If none of these work for you, go to a store that she likes; around Mother's Day most stores will have items marketed as Mother's Day gifts. If that is still too much, Amazon will have holiday-specific gifts, and you can include gift wrapping at checkout.
null
false
null
An analgesic drug, also called simply an analgesic (American English), analgaesic (British English), pain reliever, or painkiller, is any member of the group of drugs used to achieve relief from pain (that is, analgesia or pain management). Analgesics are conceptually distinct from anesthetics, which temporarily reduce, and in some instances eliminate, sensation, although analgesia and anesthesia are neurophysiologically overlapping and thus various drugs have both analgesic and anesthetic effects.
Can you please help me with a couple of points about analgesic drug, with the given text as a reference.
1. Any medication from the class of medications intended to achieve relief from pain (that is, analgesia or pain management) is referred to as an analgesic drug, also known simply as an analgesic (American English), analgaesic (British English), pain reliever, or painkiller.
2. Although analgesia and anaesthesia are neurophysiologically overlapping, and thus certain medications have both analgesic and anaesthetic effects, analgesics are conceptually distinct from anesthetics, which temporarily diminish, and in some cases eliminate, sensation.
null
false
388
We created a new dataset called CW-USPD-2016 (check-worthiness in the US presidential debates 2016) for finding check-worthy claims in context. In particular, we used four transcripts of the 2016 US election: one vice-presidential and three presidential debates. For each debate, we used the publicly-available manual analysis about it from nine reputable fact-checking sources, as shown in Table TABREF7. This could include not just a statement about factuality, but any free text that journalists decided to add, e.g., links to biographies or behavioral analysis of the opponents and moderators. We converted this to binary annotation about whether a particular sentence was annotated for factuality by a given source. Whenever one or more annotations were about part of a sentence, we selected the entire sentence, and when an annotation spanned over multiple sentences, we selected each of them. Ultimately, we ended up with a dataset of four debates, with a total of 5,415 sentences. The agreement between the sources was low as Table TABREF8 shows: only one sentence was selected by all nine sources, 57 sentences by at least five, 197 by at least three, 388 by at least two, and 880 by at least one. The reason for this is that the different media aimed at annotating sentences according to their own editorial line, rather than trying to be exhaustive in any way. This suggests that the task of predicting which sentence would contain check-worthy claims will be challenging. Thus, below we focus on a ranking task rather than on absolute predictions. Moreover, we predict which sentence would be selected (i) by at least one of the media, or (ii) by a specific medium. Note that the investigative journalists did not select the check-worthy claims in isolation. Our analysis shows that these include claims that were highly disputed during the debate, that were relevant to the topic introduced by the moderator, etc. We will make use of these contextual dependencies below, which is something that was not previously tried in related work. In particular, we used four transcripts of the 2016 US election: one vice-presidential and three presidential debates.
What four transcripts of the 2016 US election were used?
One vice-presidential and three presidential debates.
1811.09353
false
null
We evaluate our model on both English and Chinese segmentation. For both languages we used standard datasets for word segmentation and language modeling. For all datasets, we used train, validation and test splits. Since our model assumes a closed character set, we removed validation and test samples which contain characters that do not appear in the training set. In the English corpora, whitespace characters are removed. In Chinese, they are not present to begin with. Refer to Appendix SECREF9 for dataset statistics. We evaluate our model on both English and Chinese segmentation
What language do they look at?
The answers are shown as follows:
* English
* Chinese
null
false
220
Several recent papers have described approaches for learning deep communicating policies (DCPs): decentralized representations of behavior that enable multiple agents to communicate via a differentiable channel that can be formulated as a recurrent neural network. DCPs have been shown to solve a variety of coordination problems, including reference games BIBREF0 , logic puzzles BIBREF1 , and simple control BIBREF2 . Appealingly, the agents' communication protocol can be learned via direct backpropagation through the communication channel, avoiding many of the challenging inference problems associated with learning in classical decentralized decision processes BIBREF3 . But analysis of the strategies induced by DCPs has remained a challenge. As an example, fig:teaser depicts a driving game in which two cars, which are unable to see each other, must both cross an intersection without colliding. In order to ensure success, it is clear that the cars must communicate with each other. But a number of successful communication strategies are possible—for example, they might report their exact $(x, y)$ coordinates at every timestep, or they might simply announce whenever they are entering and leaving the intersection. If these messages were communicated in natural language, it would be straightforward to determine which strategy was being employed. However, DCP agents instead communicate with an automatically induced protocol of unstructured, real-valued recurrent state vectors—an artificial language we might call “neuralese,” which superficially bears little resemblance to natural language, and thus frustrates attempts at direct interpretation. We propose to understand neuralese messages by translating them. In this work, we present a simple technique for inducing a dictionary that maps between neuralese message vectors and short natural language strings, given only examples of DCP agents interacting with other agents, and humans interacting with other humans. Natural language already provides a rich set of tools for describing beliefs, observations, and plans—our thesis is that these tools provide a useful complement to the visualization and ablation techniques used in previous work on understanding complex models BIBREF4 , BIBREF5 . While structurally quite similar to the task of machine translation between pairs of human languages, interpretation of neuralese poses a number of novel challenges. First, there is no natural source of parallel data: there are no bilingual “speakers” of both neuralese and natural language. Second, there may not be a direct correspondence between the strategy employed by humans and DCP agents: even if it were constrained to communicate using natural language, an automated agent might choose to produce a different message from humans in a given state. We tackle both of these challenges by appealing to the grounding of messages in gameplay. Our approach is based on one of the core insights in natural language semantics: messages (whether in neuralese or natural language) have similar meanings when they induce similar beliefs about the state of the world. Based on this intuition, we introduce a translation criterion that matches neuralese messages with natural language strings by minimizing statistical distance in a common representation space of distributions over speaker states. We explore several related questions: Our translation model and analysis are general, and in fact apply equally to human–computer and human–human translation problems grounded in gameplay. 
In this paper, we focus our experiments specifically on the problem of interpreting communication in deep policies, and apply our approach to the driving game in fig:teaser and two reference games of the kind shown in fig:bird-examples. We find that this approach outperforms a more conventional machine translation criterion both when attempting to interoperate with neuralese speakers and when predicting their state. Our approach is based on one of the core insights in natural language semantics: messages (whether in neuralese or natural language) have similar meanings when they induce similar beliefs about the state of the world.
What insight is their approach based on?
Messages (whether in neuralese or natural language) have similar meanings when they induce similar beliefs about the state of the world.
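A minimal sketch of the translation criterion described above: pair each neuralese message with the natural-language string whose induced belief distribution over speaker states is closest. The belief values and the choice of Jensen-Shannon distance are illustrative assumptions, not the paper's exact statistical distance:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

# Belief distributions over four speaker states induced by each message (made up).
neuralese_beliefs = {"z1": np.array([0.70, 0.10, 0.10, 0.10]),
                     "z2": np.array([0.10, 0.10, 0.40, 0.40])}
english_beliefs = {"entering intersection": np.array([0.65, 0.15, 0.10, 0.10]),
                   "leaving intersection":  np.array([0.10, 0.10, 0.45, 0.35])}

def translate(z):
    """Pick the natural-language string with minimal distance in belief space."""
    return min(english_beliefs,
               key=lambda w: jensenshannon(neuralese_beliefs[z], english_beliefs[w]))

print(translate("z1"))  # -> "entering intersection"
```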
null
false
null
If a straight line passes through the points (4,2) and (8,6) then where does it cross the x axis? Explain your answer.
First, you must calculate the gradient of the line by finding the change in y over the change in x. In this case, between the first point and the second point the y co-ordinate increases by 6-2=4 and the x co-ordinate increases by 8-4=4, so the gradient is 4/4=1. We are therefore looking at a straight line with the equation y=x+c for some value of c, which we can find by plugging in one of our points: for the point (4,2), 2=4+c, so c must be -2, and the equation of the line is y=x-2. When a line crosses the x axis, its y co-ordinate is 0. Setting y=0 gives 0=x-2, so the line crosses the x axis at x=2.
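The same calculation as a small function (a sketch; the helper name is illustrative):

```python
def x_intercept(p1, p2):
    """Line through p1 and p2: gradient m, intercept c, then the x-axis crossing."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)  # change in y over change in x
    c = y1 - m * x1            # from y = m*x + c
    return -c / m              # y = 0  =>  x = -c/m

print(x_intercept((4, 2), (8, 6)))  # 2.0
```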
null
false
133
In this section, we first describe the two datasets used in our experiments: (1) Wikipedia, and (2) arXiv. Then, we report the experimental details and results. As discussed above, our model has two main components — BILSTM and INCEPTION — which generate textual and visual representations, respectively.
What are the two main components in the model of this paper?
BILSTM and INCEPTION, which generate textual and visual representations, respectively.
null
false
null
Tell me whether these are primary or secondary colors: Yellow, Blue, Green, Orange, Red, Violet
Primary Colors: Red, Yellow, Blue Secondary Colors: Green, Orange, Violet
null
false
187
Knowledge graphs (KGs) containing relationship triples (subject, relation, object), denoted as (s, r, o), are useful resources for many NLP and especially information retrieval applications such as semantic search and question answering BIBREF0 . However, large knowledge graphs, even containing billions of triples, are still incomplete, i.e., missing a lot of valid triples BIBREF1 . Therefore, much research effort has focused on the knowledge graph completion task, which aims to predict missing triples in KGs, i.e., predicting whether a triple not in KGs is likely to be valid or not BIBREF2 , BIBREF3 , BIBREF4 . To this end, many embedding models have been proposed to learn vector representations for entities (i.e., subject/head entity and object/tail entity) and relations in KGs, and obtained state-of-the-art results as summarized by BIBREF5 and BIBREF6 . These embedding models score triples (s, r, o), such that valid triples have higher plausibility scores than invalid ones BIBREF2 , BIBREF3 , BIBREF4 . For example, in the context of KGs, the score for (Melbourne, cityOf, Australia) is higher than the score for (Melbourne, cityOf, United Kingdom). Triple modeling is applied not only to KG completion, but also to other tasks which can be formulated as a triple-based prediction problem. An example is search personalization, where one would aim to tailor search results to each specific user based on the user's personal interests and preferences BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . Here the triples can be formulated as (submitted query, user profile, returned document) and used to re-rank documents returned to a user given an input query, by employing an existing KG embedding method such as TransE BIBREF3 , as proposed by BIBREF12 . Previous studies have shown the effectiveness of triple modeling for either KG completion or search personalization. However, there has been no single study investigating the performance on both tasks. Conventional embedding models, such as TransE BIBREF3 , DISTMULT BIBREF13 and ComplEx BIBREF14 , use addition, subtraction or simple multiplication operators, and thus only capture the linear relationships between entities. Recent research has raised interest in applying deep neural networks to triple-based prediction problems. For example, BIBREF15 proposed ConvKB, a convolutional neural network (CNN)-based model for KG completion, and achieved state-of-the-art results. Most KG embedding models are constructed to model entries at the same dimension of the given triple, where presumably each dimension captures some relation-specific attribute of entities. To the best of our knowledge, however, none of the existing models has a “deep” architecture for modeling the entries in a triple at the same dimension. BIBREF16 introduced capsule networks (CapsNet) that employ capsules (i.e., each capsule is a group of neurons) to capture entities in images and then use a routing process to specify connections from capsules in a layer to those in the next layer. Hence CapsNet could encode the intrinsic spatial relationship between a part and a whole constituting viewpoint-invariant knowledge that automatically generalizes to novel viewpoints. Each capsule accounts for capturing variations of an object or object part in the image, which can be efficiently visualized. Our high-level hypothesis is that embedding entries at the same dimension of the triple also have such variations, although they are not straightforward to examine visually.
To that end, we introduce CapsE to explore a novel application of CapsNet on triple-based data for two problems: KG completion and search personalization. Different from the traditional modeling design of CapsNet where capsules are constructed by splitting feature maps, we use capsules to model the entries at the same dimension in the entity and relation embeddings. In our CapsE, v_s, v_r and v_o are unique k-dimensional embeddings of s, r and o, respectively. The embedding triple [v_s, v_r, v_o] of (s, r, o) is fed to the convolution layer, where multiple filters of the same 1×3 shape are repeatedly operated over every row of the matrix to produce k-dimensional feature maps. Entries at the same dimension from all feature maps are then encapsulated into a capsule. Thus, each capsule can encode many characteristics in the embedding triple to represent the entries at the corresponding dimension. These capsules are then routed to another capsule which outputs a continuous vector whose length is used as a score for the triple. Finally, this score is used to predict whether the triple (s, r, o) is valid or not. In summary, our main contributions from this paper are as follows: • We propose an embedding model CapsE using the capsule network BIBREF16 for modeling relationship triples. To the best of our knowledge, our work is the first to explore the capsule network for knowledge graph completion and search personalization. • We evaluate our CapsE for knowledge graph completion on two benchmark datasets WN18RR BIBREF17 and FB15k-237 BIBREF18 . CapsE obtains the best mean rank on WN18RR and the highest mean reciprocal rank and highest Hits@10 on FB15k-237. • We restate the prospective strategy of expanding the triple embedding models to improve the ranking quality of search personalization systems. We adapt our model to search personalization and evaluate it on SEARCH17 BIBREF12 – a dataset of web search query logs. Experimental results show that our CapsE achieves new state-of-the-art results with significant improvements over strong baselines. Different from the traditional modeling design of CapsNet where capsules are constructed by splitting feature maps, we use capsules to model the entries at the same dimension in the entity and relation embeddings.
How is the CapsE model designed?
The authors use capsules to model the entries at the same dimension in the entity and relation embeddings.
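A minimal numpy sketch of the convolution step described above (stacking the three embeddings into a k×3 matrix, applying 1×3 filters row-wise, and grouping same-dimension entries into capsules); shapes are illustrative and the routing step is not shown:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n_filters = 6, 4
v_s, v_r, v_o = rng.standard_normal((3, k))    # subject / relation / object embeddings

E = np.stack([v_s, v_r, v_o], axis=1)           # shape (k, 3): one row per dimension
filters = rng.standard_normal((n_filters, 3))   # each filter has shape 1x3

# Applying each 1x3 filter to every row yields a k-dimensional feature map.
feature_maps = E @ filters.T                    # shape (k, n_filters)

# Capsule i groups the entries at dimension i across all feature maps.
capsules = [feature_maps[i] for i in range(k)]  # k capsules of length n_filters
print(capsules[0].shape)                        # (4,)
```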
null
false
null
What is the difference between a sunny side up egg and a scrambled egg?
In a sunny side up egg, the yolk is intact, whereas in a scrambled egg, the egg is beaten and scrambled.
null
false
203
Our first experiment tries to answer what happens when more and more languages are added to the model. There are two settings: adding languages in a random order, or adding the most closely related languages first. Cross-entropy plots for these settings are shown in fig:random and fig:swe. In both cases, the model degrades gracefully (or even improves) for a number of languages, but then degrades linearly (i.e. exponential growth of perplexity) with exponentially increasing number of languages. For comparison, fig:swesize compares this to the effect of decreasing the number of parameters in the LSTM by successively halving the hidden state size. Here the behavior is similar, but unlike the Swedish model which got somewhat better when closely related languages were added, the increase in cross-entropy is monotone. It would be interesting to investigate how the number of model parameters needs to be scaled up in order to accommodate the additional languages, but unfortunately the computational resources for such an experiment increases with the number of languages and would not be practical to carry out with our current equipment. Our first experiment tries to answer what happens when more and more languages are added to the model. There are two settings: adding languages in a random order, or adding the most closely related languages first.
What are the two settings in their first experiment?
Adding languages in a random order, or adding the most closely related languages first.
null
false
null
Which characters belong to DC or Marvel Universe? Deadpool or Deathstroke
Deadpool is Marvel, Deathstroke is DC
2002.06424
false
null
FLOAT SELECTED: Table 1: Optimal hyperparameters used for final training on the ADE and CoNLL04 datasets. FLOAT SELECTED: Table 1: Optimal hyperparameters used for final training on the ADE and CoNLL04 datasets.
How many shared layers are in the system?
1
null
false
null
Why do people like California?
California is popular for its moderate climate and diverse nature, ranging from mountainous terrain to sandy beaches and everything in between.
null
false
null
Where is Chiria?
Chiria is a census town in Pashchimi Singhbhum district in the state of Jharkhand, India. India's largest iron ore mine with reserves of 2 billion tonnes of iron ore is located here. The mine is operated by Steel Authority of India Limited.
null
false
null
Coffee production attracted immigrants in search of better economic opportunities in the early 1900s. Mainly, these were Portuguese, Italian, Spanish, German, and Japanese nationals. For instance, São Paulo received approximately 733,000 immigrants in the decade preceding 1900, whilst only receiving approximately 201,000 immigrants in the six years to 1890. The production yield of coffee increased: in 1880, São Paulo produced 1.2 million bags (25% of total production), in 1888 2.6 million (40%), and in 1902 8 million bags (60%). Coffee then accounted for 63% of the country's exports. The gains made by this trade allowed sustained economic growth in the country.
From the passage provided, extract the number of bags of coffee São Paulo produced in 1888.
São Paulo produced 2.6 million bags of coffee in 1888.
null
false
218
The standard approach to multilingual NLP is to design a single architecture, but tune and train a separate model for each language. While this method allows for customizing the model to the particulars of each language and the available data, it also presents a problem when little data is available: extensive language-specific annotation is required. The reality is that most languages have very little annotated data for most NLP tasks. Ammar et al. (2016) found that using training data from multiple languages annotated with Universal Dependencies BIBREF1 , and represented using multilingual word vectors, outperformed monolingual training. Inspired by this, we apply the idea of training one model on multiple languages—which we call polyglot training—to PropBank-style semantic role labeling (SRL). We train several parsers for each language in the CoNLL 2009 dataset BIBREF0 : a traditional monolingual version, and variants which additionally incorporate supervision from the English portion of the dataset. To our knowledge, this is the first multilingual SRL approach to combine supervision from several languages. The CoNLL 2009 dataset includes seven different languages, allowing study of trends across them. Unlike the Universal Dependencies dataset, however, the semantic label spaces are entirely language-specific, making our task more challenging. Nonetheless, the success of polyglot training in this setting demonstrates that sharing of statistical strength across languages does not depend on explicit alignment in annotation conventions, and can be done simply through parameter sharing. We show that polyglot training can result in better labeling accuracy than a monolingual parser, especially for low-resource languages. We find that even a simple combination of data is as effective as more complex kinds of polyglot training. We include a breakdown into label categories of the differences between the monolingual and polyglot models. Our findings indicate that polyglot training consistently improves label accuracy for common labels. We train several parsers for each language in the CoNLL 2009 dataset: a traditional monolingual version, and variants which additionally incorporate supervision from the English portion of the dataset.
What parsers do they train for each language in the CoNLL 2009 dataset?
A traditional monolingual version, and variants which additionally incorporate supervision from the English portion of the dataset.
null
false
null
At the 64th Annual Grammy Awards, who was the Album of the Year award given to?
Jon Batiste was awarded Album of the Year (for his album "We Are") at the 64th Annual Grammy Awards.
null
false
null
List some ideas for gifts I could give to my sister on her birthday?
- Chocolate
- A book you loved
- A cute top
- A floppy straw sun hat
- Take her out to brunch
- Print her out a book of photos of the two of you growing up
- Make a box of self-care items like good lotion, a sleep mask, etc.
- Give her a board game at a special party you arrange, at which to play it!
null
false
124
The task of speculation detection and scope resolution is critical in distinguishing factual information from speculative information. This has multiple use-cases, like systems that determine the veracity of information, and those that involve requirement analysis. This task is particularly important to the biomedical domain, where patient reports and medical articles often use this feature of natural language. This task is commonly broken down into two subtasks: the first subtask, speculation cue detection, is to identify the uncertainty cue in a sentence, while the second subtask: scope resolution, is to identify the scope of that cue. For instance, consider the example: It might rain tomorrow. The speculation cue in the sentence above is ‘might’ and the scope of the cue ‘might’ is ‘rain tomorrow’. Thus, the speculation cue is the word that expresses the speculation, while the words affected by the speculation are in the scope of that cue. This task was the CoNLL-2010 Shared Task (BIBREF0), which had 3 different subtasks. Task 1B was speculation cue detection on the BioScope Corpus, Task 1W was weasel identification from Wikipedia articles, and Task 2 was speculation scope resolution from the BioScope Corpus. For each task, the participants were provided the train and test set, which is henceforth referred to as Task 1B CoNLL and Task 2 CoNLL throughout this paper. For our experimentation, we use the sub corpora of the BioScope Corpus (BIBREF1), namely the BioScope Abstracts sub corpora, which is referred to as BA, and the BioScope Full Papers sub corpora, which is referred to as BF. We also use the SFU Review Corpus (BIBREF2), which is referred to as SFU. This subtask of natural language processing, along with another similar subtask, negation detection and scope resolution, have been the subject of a body of work over the years. The approaches used to solve them have evolved from simple rule-based systems (BIBREF3) based on linguistic information extracted from the sentences, to modern deep-learning based methods. The Machine Learning techniques used varied from Maximum Entropy Classifiers (BIBREF4) to Support Vector Machines (BIBREF5,BIBREF6,BIBREF7,BIBREF8), while the deep learning approaches included Recursive Neural Networks (BIBREF9,BIBREF10), Convolutional Neural Networks (BIBREF11) and most recently transfer learning-based architectures like Bidirectional Encoder Representation from Transformers (BERT) (BIBREF12). Figures FIGREF1 and FIGREF1 contain a summary of the papers addressing speculation detection and scope resolution (BIBREF13, BIBREF5, BIBREF9, BIBREF3, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF6, BIBREF11, BIBREF18, BIBREF10, BIBREF19, BIBREF7, BIBREF4, BIBREF8). Inspired by the most recent approach of applying BERT to negation detection and scope resolution (BIBREF12), we take this approach one step further by performing a comparative analysis of three popular transformer-based architectures: BERT (BIBREF20), XLNet (BIBREF21) and RoBERTa (BIBREF22), applied to speculation detection and scope resolution. We evaluate the performance of each model across all datasets via the single dataset training approach, and report all scores including inter-dataset scores (i.e. train on one dataset, evaluate on another) to test the generalizability of the models. This approach outperforms all existing systems on the task of speculation detection and scope resolution. 
Further, we jointly train on multiple datasets and obtain improvements over the single dataset training approach on most datasets. Contrary to results observed on benchmark GLUE tasks, we observe XLNet consistently outperforming RoBERTa. To confirm this observation, we apply these models to the negation detection and scope resolution task, and observe a continuity in this trend, reporting state-of-the-art results on three of four datasets on the negation scope resolution task. The rest of the paper is organized as follows: In Section 2, we provide a detailed description of our methodology and elaborate on the experimentation details. In Section 3, we present our results and analysis on the speculation detection and scope resolution task, using the single dataset and the multiple dataset training approach. In Section 4, we show the results of applying XLNet and RoBERTa on negation detection and scope resolution and propose a few reasons to explain why XLNet performs better than RoBERTa. Finally, the future scope and conclusion is mentioned in Section 5. we take this approach one step further by performing a comparative analysis of three popular transformer-based architectures: BERT, XLNet and RoBERTa, applied to speculation detection and scope resolution.
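To make the setup concrete, here is a minimal sketch of speculation cue detection framed as token classification with a pretrained transformer. The BIO-style cue labelling scheme (O / B-CUE / I-CUE) and the checkpoint name are illustrative assumptions, not necessarily the paper's exact configuration; swapping MODEL for an XLNet or RoBERTa checkpoint reproduces the comparative setup the evidence describes.

```python
# Minimal sketch: speculation cue detection as token classification.
# Label scheme (0=O, 1=B-CUE, 2=I-CUE) and model name are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL = "bert-base-uncased"  # could equally be an XLNet or RoBERTa checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForTokenClassification.from_pretrained(MODEL, num_labels=3)

sentence = "It might rain tomorrow ."
word_labels = [0, 1, 0, 0, 0]  # 'might' is the speculation cue

enc = tokenizer(sentence.split(), is_split_into_words=True, return_tensors="pt")
# Align word-level labels to subword tokens; special tokens get -100,
# which the cross-entropy loss ignores.
labels = [
    -100 if wid is None else word_labels[wid]
    for wid in enc.word_ids(batch_index=0)
]
out = model(**enc, labels=torch.tensor([labels]))
out.loss.backward()  # one fine-tuning step would follow with an optimizer
```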
What architecture do they perform comparative analysis on?
BERT, XLNet, and RoBERTa.
2002.08126
false
null
In this paper, we propose an improved RNN-T model with language bias to alleviate the problem. The model is trained to predict language IDs as well as the subwords. To ensure the model can learn CS information, we add language IDs in the CS point of transcription, as illustrated in Fig. 1. In the figure, we use the arrangements of different geometric icons to represent the CS distribution. Compared with normal text, the tagged data can bias the RNN-T to predict language IDs in CS points. So our method can model the CS distribution directly, no additional LID model is needed. Then we constrain the input word embedding with its corresponding language ID, which is beneficial for model to learn the language identity information from transcription. In the inference process, the predicted language IDs are used to adjust the output posteriors. The experiment results on CS corpus show that our proposed method outperforms the RNN-T baseline (without language bias) significantly. Overall, our best model achieves 16.2% and 12.9% relative error reduction on two test sets, respectively. To our best knowledge, this is the first attempt of using the RNN-T model with language bias as an end-to-end CSSR strategy. In this paper, we propose an improved RNN-T model with language bias to alleviate the problem. The model is trained to predict language IDs as well as the subwords. To ensure the model can learn CS information, we add language IDs in the CS point of transcription, as illustrated in Fig. 1.
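A minimal sketch of the transcript-tagging idea described above: insert a language ID token at each code-switch point of the transcription. The tag names and the character-range language test are illustrative assumptions, not the paper's tokenisation.

```python
# Minimal sketch: tag code-switch points in a Mandarin-English transcript.
# The <zh>/<en> tags and the CJK-range test are illustrative assumptions.
def is_mandarin(token: str) -> bool:
    return any("\u4e00" <= ch <= "\u9fff" for ch in token)

def tag_code_switches(tokens):
    tagged, prev = [], None
    for tok in tokens:
        lang = "<zh>" if is_mandarin(tok) else "<en>"
        if lang != prev:          # a code-switch point (or sentence start)
            tagged.append(lang)
            prev = lang
        tagged.append(tok)
    return tagged

print(tag_code_switches(["我", "想", "买", "iPhone", "的", "屏幕"]))
# ['<zh>', '我', '想', '买', '<en>', 'iPhone', '<zh>', '的', '屏幕']
```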
How do they obtain language identities?
The answers are shown as follows:
* model is trained to predict language IDs as well as the subwords
* we add language IDs in the CS point of transcription
null
false
null
Where is the Mall of America located?
The Mall of America is located in Bloomington, Minnesota, USA.
null
false
null
What is one year?
One Earth year is the time it takes for one revolution of the Earth around the Sun.
null
false
null
Which of these are contact sports - hockey, badminton, tennis, gridiron football
Hockey and gridiron football are contact sports.
null
false
247
In this work, we treat emotion cause extraction as a QA task and propose a new model based on deep memory networks for identifying the emotion causes for an emotion expressed in text. The key property of this approach is the use of context information in the learning process which is ignored in the original memory network. Our new memory network architecture is able to store context in different memory slots to capture context information in proper sequence by convolutional operation. Our model achieves the state-of-the-art performance on a dataset for emotion cause detection when compared to a number of competitive baselines. In the future, we will explore effective ways to model discourse relations among clauses and develop a QA system which can directly output the cause of emotions as answers. In the future, we will explore effective ways to model discourse relations among clauses and develop a QA system which can directly output the cause of emotions as answers.
What will the authors explore in the future?
In the future, the authors will explore effective ways to model discourse relations among clauses and develop a QA system which can directly output the cause of emotions as answers.
1809.05752
false
null
FLOAT SELECTED: Table 5: Overall and domain-specific Precision, Recall, and F1 scores for our models. The first row computes similarity directly from the TF-IDF matrix, as in (McCoy et al., 2015). All other rows are classifier outputs. FLOAT SELECTED: Table 5: Overall and domain-specific Precision, Recall, and F1 scores for our models. The first row computes similarity directly from the TF-IDF matrix, as in (McCoy et al., 2015). All other rows are classifier outputs.
What are their initial results on this task?
Achieved the highest per-domain scores on Substance (F1 ≈ 0.8) and the lowest scores on Interpersonal and Mood (F1 ≈ 0.5), and showed consistency in per-domain performance rankings between MLP and RBF models.
1607.05408
false
null
We create the graph using all data, and training set tweets have an initial language label distribution. A naïve approach to building the tweet-tweet subgraph requires O( INLINEFORM0 ) comparisons, measuring the similarity of each tweet with all others. Instead, we performed INLINEFORM1 -nearest-neighbour classification on all tweets, represented as a bag of unigrams, and compared each tweet and the top- INLINEFORM2 neighbours. We use Junto (mad) BIBREF5 to propagate labels from labelled to unlabelled nodes. Upon convergence, we renormalise label scores for initially unlabelled nodes to find the value of INLINEFORM4 . We create the graph using all data, and training set tweets have an initial language label distribution. A naïve approach to building the tweet-tweet subgraph requires O( INLINEFORM0 ) comparisons, measuring the similarity of each tweet with all others. Instead, we performed INLINEFORM1 -nearest-neighbour classification on all tweets, represented as a bag of unigrams, and compared each tweet and the top- INLINEFORM2 neighbours. We use Junto (mad) BIBREF5 to propagate labels from labelled to unlabelled nodes. Upon convergence, we renormalise label scores for initially unlabelled nodes to find the value of INLINEFORM4 .
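A minimal sketch of the k-NN step, assuming scikit-learn with cosine similarity over bag-of-unigram counts; the toy tweets and the value of k are illustrative. The resulting edge list is what a label propagation tool such as Junto would then consume.

```python
# Minimal sketch: build the tweet-tweet subgraph by connecting each tweet
# to its top-k neighbours instead of comparing all O(n^2) pairs.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neighbors import NearestNeighbors

tweets = ["ik hou van jou", "i love you", "ik hou van pizza", "love you too"]
X = CountVectorizer(analyzer="word").fit_transform(tweets)  # bag of unigrams

k = 2
nn = NearestNeighbors(n_neighbors=k + 1, metric="cosine").fit(X)
_, idx = nn.kneighbors(X)

# Edges of the tweet-tweet subgraph (column 0 is the tweet itself, skip it);
# label propagation over labelled/unlabelled nodes happens downstream.
edges = [(i, int(j)) for i, row in enumerate(idx) for j in row[1:]]
print(edges)
```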
How are labels propagated using this approach?
The answers are shown as follows: * We use Junto (mad) BIBREF5 to propagate labels from labelled to unlabelled nodes.
null
false
null
The Broad Brook Canal is a water-supply canal feeding the Springfield Reservoir (a.k.a. the Ludlow Reservoir) in Ludlow, Massachusetts, a public water supply for the city of Springfield, Massachusetts. Its northern segment is also known as the Jabish Canal. The canal was first constructed circa 1875. It was 11,960 feet (3,650 m) in length, and fed the Springfield Reservoir at its eastern side with water collected in the Belchertown Reservoir and swamp. This water, however, was thought to be of poor quality, and thus in 1890–1891 the canal was extended north through the Belchertown Reservoir to Jabish Brook, east of Broad Brook. After this addition, the canal's total length was about 8 miles (13 km), of which 1,400 feet (430 m) ran through a cast-iron pipe (54 inches in diameter) across the Cherry Valley dam. The entire canal was worked to be 22 feet (6.7 m) wide at the surface, 8 feet (2.4 m) wide at the bottom, with a depth of 4.66 feet (1.42 m).
What is feeding Springfield Reservoir?
The Broad Brook Canal is a water-supply canal feeding the Springfield Reservoir, also known as the Ludlow Reservoir, a public water supply for the city of Springfield, Massachusetts. The canal was first constructed circa 1875 and was originally 11,960 feet in length.
null
false
103
Despite the rapid progress of deep learning techniques on diverse supervised learning tasks, these models remain brittle to subtle shifts in the data distribution. Even when the permissible changes are confined to barely-perceptible perturbations, training robust models remains an open challenge. Following the discovery that imperceptible attacks could cause image recognition models to misclassify examples BIBREF0 , a veritable sub-field has emerged in which authors iteratively propose attacks and countermeasures. For all the interest in adversarial computer vision, these attacks are rarely encountered outside of academic research. However, adversarial misspellings constitute a longstanding real-world problem. Spammers continually bombard email servers, subtly misspelling words in efforts to evade spam detection while preserving the emails' intended meaning BIBREF1 , BIBREF2 . As another example, programmatic censorship on the Internet has spurred communities to adopt similar methods to communicate surreptitiously BIBREF3 . In this paper, we focus on adversarially-chosen spelling mistakes in the context of text classification, addressing the following attack types: dropping, adding, and swapping internal characters within words. These perturbations are inspired by psycholinguistic studies BIBREF4 , BIBREF5 which demonstrated that humans can comprehend text altered by jumbling internal characters, provided that the first and last characters of each word remain unperturbed. First, in experiments addressing both BiLSTM and fine-tuned BERT models, comprising four different input formats: word-only, char-only, word+char, and word-piece BIBREF6 , we demonstrate that an adversary can degrade a classifier's performance to that achieved by random guessing. This requires altering just two characters per sentence. Such modifications might flip words either to a different word in the vocabulary or, more often, to the out-of-vocabulary token UNK. Consequently, adversarial edits can degrade a word-level model by transforming the informative words to UNK. Intuitively, one might suspect that word-piece and character-level models would be less susceptible to spelling attacks as they can make use of the residual word context. However, our experiments demonstrate that character and word-piece models are in fact more vulnerable. We show that this is due to the adversary's effective capacity for finer grained manipulations on these models. While against a word-level model, the adversary is mostly limited to UNK-ing words, against a word-piece or character-level model, each character-level add, drop, or swap produces a distinct input, providing the adversary with a greater set of options. Second, we evaluate first-line techniques including data augmentation and adversarial training, demonstrating that they offer only marginal benefits here, e.g., a BERT model achieving $90.3$ accuracy on a sentiment classification task, is degraded to $64.1$ by an adversarially-chosen 1-character swap in the sentence, which can only be restored to $69.2$ by adversarial training. Third (our primary contribution), we propose a task-agnostic defense, attaching a word recognition model that predicts each word in a sentence given a full sequence of (possibly misspelled) inputs. The word recognition model's outputs form the input to a downstream classification model. Our word recognition models build upon the RNN-based semi-character word recognition model due to BIBREF7 . 
While our word recognizers are trained on domain-specific text from the task at hand, they often predict UNK at test time, owing to the small domain-specific vocabulary. To handle unobserved and rare words, we propose several backoff strategies including falling back on a generic word recognizer trained on a larger corpus. Incorporating our defenses, BERT models subject to 1-character attacks are restored to $88.3$, $81.1$, $78.0$ accuracy for swap, drop, add attacks respectively, as compared to $69.2$, $63.6$, and $50.0$ for adversarial training. Fourth, we offer a detailed qualitative analysis, demonstrating that a low word error rate alone is insufficient for a word recognizer to confer robustness on the downstream task. Additionally, we find that it is important that the recognition model supply few degrees of freedom to an attacker. We provide a metric to quantify this notion of sensitivity in word recognition models and study its relation to robustness empirically. Models with low sensitivity and word error rate are most robust. Third (our primary contribution), we propose a task-agnostic defense, attaching a word recognition model that predicts each word in a sentence given a full sequence of (possibly misspelled) inputs.
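For illustration, here is a minimal sketch of the three character-level attack types on a single word, keeping the first and last characters intact as in the psycholinguistic constraint the evidence cites; the helper names and example word are illustrative, not the paper's attack code.

```python
# Minimal sketch: one-character swap, drop, and add perturbations that leave
# the first and last characters of the word untouched.
import random

def swap(word, i):   # swap internal characters i and i+1 (1 <= i <= len-3)
    c = list(word); c[i], c[i + 1] = c[i + 1], c[i]; return "".join(c)

def drop(word, i):   # drop internal character i (1 <= i <= len-2)
    return word[:i] + word[i + 1:]

def add(word, i):    # insert a random letter at internal position i (1 <= i <= len-1)
    return word[:i] + random.choice("abcdefghijklmnopqrstuvwxyz") + word[i:]

word = "terrible"
print(swap(word, 3))  # 'terirble'
print(drop(word, 4))  # 'terrble'
print(add(word, 4))   # e.g. 'terrxible' (random letter inserted)
```

An adversary would search over all such perturbations per word and keep the one that most degrades the downstream classifier.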
What is the characteristic of the word recognition model?
Predicting each word in a sentence given a full sequence of (possibly misspelled) inputs.
null
false
null
Is Boston a good city to visit in the summer?
Yes, Boston is a great city to visit in the summer. The weather tends to be warm, and there are many historical attractions in the city including the Freedom Trail and Faneuil Hall, as well as museums like the Museum of Science, Museum of Fine Arts, and Isabella Stewart Gardner Museum. You can walk around Boston Common and the Public Garden, or along the waterfront in the Seaport District and the North End. You can kayak in the Charles River, and take in a baseball game at Fenway Park. There are many fine dining restaurants, and something for everyone to enjoy.
1908.06606
false
null
Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what the specific laboratory test result is, are obtained. It is important to extract structured data from clinical text because bio-medical systems or bio-medical researches greatly rely on structured data but they cannot obtain them directly. In addition, clinical text often contains abundant healthcare information. CTS is able to provide large-scale extracted structured data for enormous down-stream clinical researches. However, end-to-end CTS is a very challenging task. Different CTS tasks often have non-uniform output formats, such as specific-class classifications (e.g. tumor stage), strings in the original text (e.g. result for a laboratory test) and inferred values from part of the original text (e.g. calculated tumor size). Researchers have to construct different models for it, which is already costly, and hence it calls for a lot of labeled data for each model. Moreover, labeling necessary amount of data for training neural network requires expensive labor cost. To handle it, researchers turn to some rule-based structuring methods which often have lower labor cost. To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text. For some cases, it is already the final answer in deed (e.g., extracting sub-string). While for other cases, it needs several steps to obtain the final answer, such as entity names conversion and negative words recognition. Our presented QA-CTS task unifies the output format of the traditional CTS task and make the training data shareable, thus enriching the training data. The main contributions of this work can be summarized as follows. Clinical text structuring (CTS) is a critical task for fetching medical research data from electronic health records (EHRs), where structural patient medical data, such as whether the patient has specific symptoms, diseases, or what the tumor size is, how far from the tumor is cut at during the surgery, or what the specific laboratory test result is, are obtained. It is important to extract structured data from clinical text because bio-medical systems or bio-medical researches greatly rely on structured data but they cannot obtain them directly. However, end-to-end CTS is a very challenging task. Different CTS tasks often have non-uniform output formats, such as specific-class classifications (e.g. tumor stage), strings in the original text (e.g. result for a laboratory test) and inferred values from part of the original text (e.g. calculated tumor size). To reduce the pipeline depth and break the barrier of non-uniform output formats, we present a question answering based clinical text structuring (QA-CTS) task (see Fig. FIGREF1). Unlike the traditional CTS task, our QA-CTS task aims to discover the most related text from original paragraph text.
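A minimal sketch of the QA-CTS framing as extractive QA: pair the query with the paragraph and predict a start/end span over the paragraph. The checkpoint and the Chinese toy example are illustrative assumptions, and the QA head here is untrained, so the predicted span is arbitrary until fine-tuned.

```python
# Minimal sketch: QA-CTS as extractive span prediction over clinical text.
# Checkpoint and toy pathology sentence are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

MODEL = "bert-base-chinese"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForQuestionAnswering.from_pretrained(MODEL)

query = "肿瘤大小"                      # "tumor size"
paragraph = "镜下见肿瘤大小约2.5cm。"    # toy pathology sentence
enc = tokenizer(query, paragraph, return_tensors="pt")

with torch.no_grad():
    out = model(**enc)
start = int(out.start_logits.argmax())
end = int(out.end_logits.argmax())
span = tokenizer.decode(enc["input_ids"][0][start : end + 1])
print(span)  # arbitrary until the span head is fine-tuned on QA-CTS data
```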
How is the clinical text structuring task defined?
CTS is the task of extracting structured data from unstructured clinical text in EHRs. The authors define the QA-CTS task, which aims to discover the most related text from the original paragraph text.
null
false
null
Luigi Fagioli is the oldest winner of a Formula One Grand Prix; he was 53 years and 22 days old when he won the 1951 French Grand Prix.
Who is the oldest driver to win a Formula 1 race?
The oldest driver to win a Formula 1 race is the Italian Luigi Fagioli, who was 53 years and 22 days old when he took his only win at the 1951 French Grand Prix.
null
false
356
The Reuters Corpus Volume 2 BIBREF2, in short RCV2, is a multilingual corpus with a collection of 487,000 news stories. Each news story was manually classified into four hierarchical groups: CCAT (Corporate/Industrial), ECAT (Economics), GCAT (Government/Social) and MCAT (Markets). Topic codes were assigned to capture the major subject of the news story. The entire corpus covers thirteen languages, i.e. Dutch, French, German, Chinese, Japanese, Russian, Portuguese, Spanish, Latin American Spanish, Italian, Danish, Norwegian, and Swedish, written by local reporters in each language. The news stories are not parallel. Single-label stories, i.e. those labeled with only one topic out of the four top categories, are often used for evaluations. However, the class distributions vary significantly across all the thirteen languages (see Table 1). Therefore, using random samples to extract evaluation corpora may lead to very imbalanced test sets, i.e. undesired and misleading variability among the languages when the main focus is to evaluate cross-lingual transfer. The Reuters Corpus Volume 2 (Lewis et al., 2004), in short RCV2, is a multilingual corpus with a collection of 487,000 news stories.
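A minimal sketch of the balanced-sampling remedy the evidence motivates: draw a fixed number of stories per top category instead of sampling at random, so the four RCV2 categories are equally represented in the test set. The toy labels and sample size are illustrative assumptions.

```python
# Minimal sketch: class-balanced test-set sampling over the four RCV2
# top categories. Toy data; per-class count is an illustrative choice.
import random
from collections import defaultdict

def balanced_sample(docs, labels, per_class, seed=0):
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for doc, lab in zip(docs, labels):
        by_class[lab].append(doc)
    sample = []
    for lab, items in by_class.items():
        sample.extend(rng.sample(items, min(per_class, len(items))))
    return sample

docs = [f"story-{i}" for i in range(1000)]
labels = [random.choice(["CCAT", "ECAT", "GCAT", "MCAT"]) for _ in docs]
print(len(balanced_sample(docs, labels, per_class=50)))  # 200
```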
What is the Reuters Corpus Volume 2?
It is a multilingual corpus with a collection of 487,000 news stories.
null
false
null
What is predictive maintenance?
Predictive maintenance is widely used in the manufacturing industry. It is about determining when an asset should be maintained, and what specific maintenance activities need to be performed, based on the asset's actual condition or state rather than on a fixed schedule, so that uptime and productivity are maximized. The goal is to predict and prevent failures and to perform the right maintenance routines, reducing costly equipment downtime.
null
false
null
Who is the best NBA player of all time?
This is debatable, but Michael Jordan is widely considered the best NBA player of all time.
null
false
null
What is a Scottish Fold cat?
A Scottish Fold cat is a feline with a genetic mutation that causes its ears to fold down. Scottish Fold cats are rare, but the breed is well loved by many for its gentle demeanor and soft fur.
null
false
null
Is it better to ski or snowboard?
This depends on individual preference. Most people find skiing easier to pick up, so if you are a beginner to snow sports, skiing might be an excellent place to start. There are more skiers than snowboarders, so some people might choose skiing for that reason. Additionally, there are ski-only resorts for skiers to visit but no snowboarder-only resorts, which gives people who ski a broader pool of mountains to visit. Snowboarders might say snowboarding is better because snowboard boots are more comfortable and easier to walk in.
null
false
null
My wife and I have two kids: one boy, age 4, and one girl, age 1. We are thinking of buying a pet. What are some good pet options for our family of four? Please include some advantages and disadvantages for each option.
Congratulations on your growing family! Pets are often treated like another member of the family and while they can require a lot of work, they can also bring a lot of joy and laughter into your life. Here are a few options for pets and some advantages and disadvantages for each: 1. Dogs - Dogs can be amazing pets. They are typically very loyal, friendly, and beloved by families all over the world. Dogs can be taught to do tricks like sit, stay, and fetch. There are also over 300 different breeds of dog and they come in all kinds of sizes, ranging from a few lbs (1-2 kg) to over 150 lbs (70 kg). Dogs require quite a bit of work to train and can be very expensive to own due to high veterinary bills. Some dogs can be aggressive, and this is an important factor to consider given that you have two young children. Dogs also typically require access to a yard or regular walking to ensure that they are not going to the bathroom inside. Also, many people are allergic to dogs because most dogs shed. If your family members are prone to allergies and are still looking to add a dog to the family, perhaps you should consider a hypoallergenic breed like a Yorkshire Terrier, poodle, or a poodle mix like a Golden Doodle. Dogs typically live between 7-15 years, depending on the breed. 2. Cats - Cats are another very popular pet option for families. Like dogs, they come in many different breeds, though there are far fewer breed options for cats than for dogs. Most cats grow to be about 10 lbs (4 kg), but they can range in size from around 5 lbs (2 kg) to 25 lbs (11 kg). Indoor cats typically live between 12-18 years. Cats typically require far less work and are generally more independent than dogs. While most cats can't be trained to do tricks, cats are often lower maintenance than dogs. Cats can often be left alone at the house for a long time. Most cats use a litter box to relieve themselves, so they typically don't need regular walks or access to a yard. Like dogs, most cats shed, so allergies and cleaning pet dander can be a nuisance. Cats typically eat less than dogs and often are less expensive to own. 3. Fish - Fish are a popular pet option for families that are looking for a very low-maintenance pet that only requires feeding a couple of times per day. Fish typically require water, a clean fish tank and fish food, and that's about it. Some pet fish, like goldfish, can be bought for as little as $10, while exotic fish can cost hundreds or even thousands of dollars. Most pet fish will live up to 2 years if properly cared for, but certain species can live up to 30 years. 4. Reptiles - Reptiles like snakes, lizards, or iguanas can also be pets. They typically require little maintenance and can live for 20-30 years. They require a cage and regular feeding, and are generally easy to take care of. For medical emergencies, they may require specialized veterinarians, who can be hard to find. 5. Rodents - Rodents like gerbils, hamsters and even mice can be pets as well. They require regular feeding and a small cage, and are typically very easy to take care of. People often include a running wheel in the cage for these small animals so that they can exercise. However, they don't live very long; most rodents live for only a couple of years. In addition to the list above, there are many other options for pets such as guinea pigs, rabbits, pigs and even exotic options like monkeys. 
Depending on where you live, there may be certain restrictions on the types of pets that you can own. Please be sure to check these restrictions before buying your family pet.