| paper_id (string, 10 chars, nullable) | yes_no (bool, 2 classes) | paper_index (int64, 0–519, nullable) | evidence (string, 0–37.7k chars) | question (string, 4–11.7k chars) | answer (string, 1–26k chars) |
|---|---|---|---|---|---|
null | false
| null |
The cup game is a children's clapping game that involves tapping and hitting a cup using a defined rhythm.
|
What is Cup Game?
|
The cup game is a children's clapping game that involves tapping and hitting a cup using a defined rhythm. The game can be played by many players seated around a table and is often played in large groups. Each player possesses a cup, and in unison the players tap out the defined rhythm using their cups. It can be played competitively: the rhythm speeds up in each "round", a player who makes a mistake in the rhythm must drop out of the game, a new "round" starts after each elimination, and play continues until only one player remains; that player is the winner.
|
null | false
| null |
World records were not officially recognized by the IAAF until 1 January 2004; previously, the best times for the marathon were referred to as the 'world best'. Courses must conform to IAAF standards for a record to be recognized. However, marathon routes still vary greatly in elevation, course, and surface, making exact comparisons impossible. Typically, the fastest times are set over relatively flat courses near sea level, during good weather conditions and with the assistance of pacesetters.
The current world record time for men over the distance is 2 hours, 1 minute and 9 seconds, set at the Berlin Marathon by Eliud Kipchoge of Kenya on 25 September 2022, an improvement of 30 seconds over the previous record, also set by Kipchoge at the Berlin Marathon on 16 September 2018.
The world record for women, 2 hours, 14 minutes and 4 seconds, was set by Brigid Kosgei of Kenya at the Chicago Marathon on 13 October 2019, breaking the record that Paula Radcliffe of Great Britain had set over 16 years earlier at the London Marathon.
|
Given the reference text about the marathon, who currently holds the men's record in the marathon?
|
The current men's world record in the marathon is held by Eliud Kipchoge, who ran the Berlin Marathon on 25 September 2022 in a time of 2 hours, 1 minute and 9 seconds.
|
null | false
| null |
Identify which instrument is string or percussion: Bendir, Rabel
|
Rabel is string, Bendir is percussion.
|
|
null | false
| null |
The series was often described as "a show about nothing." However, in 2014, Seinfeld stated: "The pitch for the show, the real pitch, when Larry and I went to NBC in 1988, was we want to show how a comedian gets his material. The show about nothing was just a joke in an episode many years later, and Larry and I to this day are surprised that it caught on as a way that people describe the show, because to us it's the opposite of that." David similarly commented: "I like taking the worst qualities that a person has and trying to make something funny out of it. Doesn’t everybody do terrible things and have terrible thoughts? Just by trying to be as funny, you’re going to deal with a lot of things that are real, so the show’s really about something. The whole thing about the show being about nothing is ridiculous." Much of the show's humor is based upon repeated use of irony, incongruity and (oftentimes unfortunate) coincidences.
|
What do Jerry Seinfeld and Larry David have to say about Seinfeld being called "a show about nothing"?
|
Larry and Jerry are surprised that the "a show about nothing" phrase caught on as a way that people describe the show, and they are not fond of that description of the show.
Larry David commented, "...the show’s really about something. The whole thing about the show being about nothing is ridiculous". Larry states that the show is about "taking the worst qualities that a person has and trying to make something funny out of it". Jerry mentioned that the show was about "how a comedian gets his material".
|
1908.10275
| false
| null |
From the description of music genres provided above, it emerges that there is a limited number of super-genres and derivation lines BIBREF19, BIBREF20, as shown in figure FIGREF1.
From a computational perspective, genres are classes and, although they can be treated by machine learning algorithms, they do not include information about the relations between them. In order to formalize the relations between genres for computing purposes, we define a continuous genre scale from the most experimental and introverted super-genre to the most euphoric and inclusive one. We selected from Wikipedia the 77 genres that we mentioned in bold in the previous paragraph and asked two independent raters to read the Wikipedia pages of the genres, listen to samples or artists of the genres (if they did not know them already) and then annotate the following dimensions:
|
How many genres did they collect from?
|
The answers are shown as follows:
* 77 genres
|
null | false
| null |
Irina Anatolyevna Aksyonova (also Aksenova, Russian: Ирина Анатольевна Аксёнова; born 24 September 1962) is a retired Russian swimmer. She won bronze medals in the 4 × 100 m medley relay at the 1978 World Aquatics Championships and 1980 Summer Olympics, where she swam for the Soviet Union team in the preliminaries. At the 1980 Olympics she also finished fourth in the 800 m, fifth in the 400 m and eighth in the 200 m freestyle events.
Between 1977 and 1980 she won seven national titles and set five national records in 200–800 m freestyle events. However, when her medley relay team finished third at the 1978 World Championships, she was swimming the 100 m butterfly leg. Currently she lives in the Czech Republic.
|
Use the text to tell me what competitions Irina Aksyonova won or placed highly in.
|
Irina Aksyonova placed highly or won medals in many competitions, including the 1978 World Aquatics Championships, where she won a bronze medal, and the 1980 Summer Olympics, where she won a bronze medal and also placed highly in the 800 meter, 400 meter, and 200 meter freestyle events. She also won seven national titles.
|
null | false
| null |
What is orange wine?
|
Orange wine is a type of wine made from white wine grapes in which the grape skins are not removed, as they are in white wine production. The skins stay in contact with the juice for days or even months, which gives the wine its colour, flavour and texture.
|
|
null | false
| null |
In mathematics, a metric space is a set together with a notion of distance between its elements, usually called points. The distance is measured by a function called a metric or distance function. Metric spaces are the most general setting for studying many of the concepts of mathematical analysis and geometry.
The most familiar example of a metric space is 3-dimensional Euclidean space with its usual notion of distance. Other well-known examples are a sphere equipped with the angular distance and the hyperbolic plane. A metric may correspond to a metaphorical, rather than physical, notion of distance: for example, the set of 100-character Unicode strings can be equipped with the Hamming distance, which measures the number of characters that need to be changed to get from one string to another.
Since they are very general, metric spaces are a tool used in many different branches of mathematics. Many types of mathematical objects have a natural notion of distance and therefore admit the structure of a metric space, including Riemannian manifolds, normed vector spaces, and graphs. In abstract algebra, the p-adic numbers arise as elements of the completion of a metric structure on the rational numbers. Metric spaces are also studied in their own right in metric geometry and analysis on metric spaces.
Many of the basic notions of mathematical analysis, including balls, completeness, as well as uniform, Lipschitz, and Hölder continuity, can be defined in the setting of metric spaces. Other notions, such as continuity, compactness, and open and closed sets, can be defined for metric spaces, but also in the even more general setting of topological spaces.
|
In the provided text multiple examples of a metric are given. Extract the third example of a metric.
|
The third metric, or distance function, mentioned in the text is the Hamming distance. The Hamming distance measures the number of characters that need to be changed to transform one string into another.
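As a concrete illustration (not part of the text), a minimal Python sketch of the Hamming distance between two equal-length strings:

```python
def hamming_distance(s1: str, s2: str) -> int:
    """Number of positions at which the two equal-length strings differ."""
    if len(s1) != len(s2):
        raise ValueError("Hamming distance is only defined for equal-length strings")
    return sum(c1 != c2 for c1, c2 in zip(s1, s2))

# "karolin" and "kathrin" differ at three positions.
print(hamming_distance("karolin", "kathrin"))  # 3
```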
|
null | false
| null |
The following words describe 'active' or 'passive' types of engagement. Identify which are 'active' and which are 'passive': create, consume, participate, build, make, watch, listen, and lead.
|
'Active' words: create, lead, participate, make, build
'Passive' words: watch, consume, listen
|
|
null | false
| null |
Electronics deals with electrical circuits that involve active electrical components such as vacuum tubes, transistors, diodes, sensors and integrated circuits, and associated passive interconnection technologies. The nonlinear behaviour of active components and their ability to control electron flows makes digital switching possible, and electronics is widely used in information processing, telecommunications, and signal processing. Interconnection technologies such as circuit boards, electronics packaging technology, and other varied forms of communication infrastructure complete circuit functionality and transform the mixed components into a regular working system.
Today, most electronic devices use semiconductor components to perform electron control. The underlying principles that explain how semiconductors work are studied in solid state physics, whereas the design and construction of electronic circuits to solve practical problems are part of electronics engineering.
|
What does electronics deal with?
|
Electronics deals with electrical circuits that involve active electrical components such as vacuum tubes, transistors, diodes, sensors and integrated circuits, and associated passive interconnection technologies.
|
null | false
| null |
Group these animals into 2-legged and 4-legged categories: kangaroo, dog, rhino, lion, giraffe, penguin, chimpanzee and cat.
|
Penguin, chimpanzee and kangaroo have 2 legs. Dog, cat, lion, rhino and giraffe have 4 legs.
|
|
null | false
| null |
Classify each of the following as inclined or flat: stairs, beach, mountain, lake, escalator, pond, ramp
|
Stairs: Inclined
Beach: Flat
Mountain: Inclined
Lake: Flat
Escalator: Inclined
Pond: Flat
Ramp: Inclined
|
|
null | false
| 121
|
To evaluate the effectiveness of our model, and demonstrate its flexibility at handling heavy data-structure made of several types of entities, we used the RotoWire dataset BIBREF10. It includes basketball games statistical tables paired with journalistic descriptions of the games, as can be seen in the example of Figure FIGREF1. The descriptions are professionally written and average 337 words with a vocabulary size of $11.3$K. There are 39 different record keys, and the average number of records (resp. entities) in a single data-structure is 628 (resp. 28). Entities are of two types, either team or player, and player descriptions depend on their involvement in the game. We followed the data partitions introduced with the dataset and used a train/validation/test sets of respectively $3,398$/727/728 (data-structure, description) pairs.
To evaluate the effectiveness of our model, and demonstrate its flexibility at handling heavy data-structure made of several types of entities, we used the RotoWire dataset
|
What dataset is used?
|
RotoWire dataset.
|
null | false
| null |
Identify which instrument is string or percussion: Paiban, Fegereng
|
Fegereng is string, Paiban is percussion.
|
|
null | false
| null |
Zangisheh (Persian: زنگيشه, also Romanized as Zangīsheh) is a village in Qarah Su Rural District, in the Central District of Kermanshah County, Kermanshah Province, Iran. At the 2006 census, its population was 153, in 34 families.
|
Based on this passage, what is the average number of people per family in Zangisheh?
|
I can only tell you what it was in 2006, because this is when the survey data provided is from. The average number of people per family at that time is the population divided by the number of families, or 153 / 34.
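Carrying out that division (simple arithmetic, not stated in the passage):

```python
population_2006 = 153
families_2006 = 34
print(population_2006 / families_2006)  # 4.5 people per family on average
```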
|
null | false
| null |
Yeol Eum Son (born May 2, 1986, in Wonju, South Korea) is a South Korean classical pianist. She first drew international attention in October 2004 at age 18 when she appeared as a soloist performing Liszt Piano Concerto No. 1 with the New York Philharmonic under the baton of Lorin Maazel on their Asia tour in Seoul, Daejeon, and Tokyo. Son again performed with Maazel and the New York Philharmonic when they returned to the Seoul Arts Center in February 2008, this time as soloist for Beethoven Piano Concerto No. 2.
Her awards include Silver Medals at both the Thirteenth Van Cliburn International Piano Competition (2009) and 2011 International Tchaikovsky Competition, where she also received the Best Chamber Concerto Performance and the Best Performance of the Commissioned Work prizes. She also won the Bronze Medal at the Arthur Rubinstein International Piano Master Competition (2005)
|
How good as a pianist is Yeol Eum Son
|
Yeol Eum Son (born May 2, 1986, in Wonju, South Korea) is a South Korean classical pianist. She first drew international attention in October 2004 at age 18 when she appeared as a soloist performing Liszt Piano Concerto No. 1 with the New York Philharmonic under the baton of Lorin Maazel on their Asia tour in Seoul, Daejeon, and Tokyo. Son again performed with Maazel and the New York Philharmonic when they returned to the Seoul Arts Center in February 2008, this time as soloist for Beethoven Piano Concerto No. 2.
Her awards include Silver Medals at both the Thirteenth Van Cliburn International Piano Competition (2009) and 2011 International Tchaikovsky Competition, where she also received the Best Chamber Concerto Performance and the Best Performance of the Commissioned Work prizes. She also won the Bronze Medal at the Arthur Rubinstein International Piano Master Competition (2005)
|
null | false
| null |
The Lions have had several division rivals in their existence. Their oldest rivals are the Chicago Bears and the Green Bay Packers, whom they have faced since 1930. The Minnesota Vikings have been in a division with Detroit ever since their inaugural season in 1961. Another notable longtime division opponent was the Tampa Bay Buccaneers (25 seasons from 1977 to 2001).
The Lions also have a preseason rivalry with the Cleveland Browns, dubbed the Great Lakes Classic. The two teams have been playing for The Barge Trophy since 2002. The Lions and Browns had a solid rivalry in the 1950s, when they met four times for the NFL championship (Detroit won three of the matchups); they have met much less frequently during the regular season since the 1970 AFL–NFL merger due to the Browns' move to the AFC.
|
Extract a list of rivalries with the Detroit Lions in bulleted form, sorted alphabetically.
|
Here is a list of rivalries with the Detroit Lions, sorted alphabetically:
* Chicago Bears
* Cleveland Browns
* Green Bay Packers
* Minnesota Vikings
* Tampa Bay Buccaneers
|
null | false
| null |
California is a state in the Western United States, located along the Pacific Coast. With nearly 39.2 million residents across a total area of approximately 163,696 square miles (423,970 km2), it is the most populous U.S. state and the third-largest by area. It is also the most populated subnational entity in North America and the 34th most populous in the world. The Greater Los Angeles and San Francisco Bay areas are the nation's second and fifth most populous urban regions respectively, with the former having more than 18.7 million residents and the latter having over 9.6 million. Sacramento is the state's capital, while Los Angeles is the most populous city in the state and the second most populous city in the country. San Francisco is the second most densely populated major city in the country. Los Angeles County is the country's most populous, while San Bernardino County is the largest county by area in the country. California borders Oregon to the north, Nevada and Arizona to the east, the Mexican state of Baja California to the south; and it has a coastline along the Pacific Ocean to the west.
|
Extract the total population and total area of California from the reference text
|
California has a population of 39.2 million residents and an area of 163,696 square miles.
|
null | false
| null |
Who is the Greatest male Tennis player of all time?
|
Here are a few things to consider when determining who that player is.
1. Player who has won most grand slams
2. Player who spent most time as world #1
3. Player who has won grand slams on all three surfaces (Hard, Clay, Grass)
4. Player who has won most year end ATP finals and also Olympics
5. Head to head wins between players
Based on these criteria, the greatest male tennis player of all time is Novak Djokovic. There are two other male players in contention: Roger Federer and Rafael Nadal. Djokovic and Nadal have each won 22 grand slams. All three have a favorite surface (grass for Federer, clay for Nadal and hard courts for Djokovic), but they have won grand slams on all surfaces. Djokovic holds the record for the most time spent as world #1. He also has a better head-to-head record against both Nadal and Federer. Djokovic and Federer share the record for winning the most year-end ATP Finals. Roger Federer kickstarted the golden age of tennis and is extremely popular for the skill, talent and grace with which he played. Federer won Wimbledon the most times of any player. Nadal won the French Open 14 times, a record that may never be broken. History will remember all three players for their contribution to tennis, and many tennis fans are lucky to have been part of this golden age of the sport.
|
|
null | false
| null |
Write the first paragraph of a blog post describing the appeal of gravel bikes in the bay area.
|
For many years, cycling has been a staple in the bay area. Its temperate climate, easy access to nature, and bike-friendly sentiment have fostered a steady growth of interest in the sport, especially with the rise of COVID creating a shift toward distanced outdoor activities in recent years. Traditionally, cycling has had three modalities: mountain biking, road biking, and cyclocross. Recently, a fourth modality, gravel biking, has had a rapid rise in popularity.
|
|
null | false
| null |
Information theory's fundamental contribution to natural language processing and computational linguistics was further established in 1951 with Claude Shannon's article "Prediction and Entropy of Printed English", which showed upper and lower bounds of entropy on the statistics of English, giving a statistical foundation to language analysis. In addition, he proved that treating whitespace as the 27th letter of the alphabet actually lowers uncertainty in written language, providing a clear quantifiable link between cultural practice and probabilistic cognition.
|
What were the main results of applying statistical analysis to the English language?
|
The main results of applying statistical analysis to the English language were establishing upper and lower bounds of entropy for it and that treating whitespace as a 27th letter of the alphabet lowers uncertainty.
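As a rough, illustrative sketch only (a unigram approximation; Shannon's actual bounds were derived from higher-order statistics and human prediction experiments), per-character entropy can be estimated from letter frequencies like this:

```python
import math
from collections import Counter

def unigram_entropy(text: str) -> float:
    """Per-symbol Shannon entropy (in bits) under a unigram character model."""
    counts = Counter(text)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

sample = "the quick brown fox jumps over the lazy dog and then runs away"
print(unigram_entropy(sample.replace(" ", "")))  # letters only
print(unigram_entropy(sample))                   # whitespace treated as a 27th symbol
```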
|
null | false
| null |
Tennis is a racket sport that is played either individually against a single opponent (singles) or between two teams of two players each (doubles). Each player uses a tennis racket that is strung with cord to strike a hollow rubber ball covered with felt over or around a net and into the opponent's court. The object of the game is to manoeuvre the ball in such a way that the opponent is not able to play a valid return. The player who is unable to return the ball validly will not gain a point, while the opposite player will.
|
Who wins the point in tennis?
|
The player whose opponent is unable to return the ball validly wins the point.
|
null | false
| 377
|
We have randomly selected 150 problems out of the RTE corpus which were marked as “YES” (i.e. entailment holds). The problems were not further selected nor doctored by us. The problems were then re-rated by experts in logic and/or linguistics. For each problem, three experts were consulted, and each expert rated 30 problems. More precisely, the experts were instructed to re-consider each problem and be especially wary of missing hypotheses. If they considered the entailment to hold, we still gave the instruction to optionally mention any additional implicit hypothesis that they would be using. Similarly, if they considered that there was no entailment in the problem, they were prompted to (optionally) give an argument for their judgement.
In order to facilitate data collection, the experts were chosen from the network of contacts of the author. Despite this method, the process of data collection took nearly six months. The authors themselves contributed to the data-collection process (taking one set of 30 problems each) in order to complete the survey.
Additionally, collecting all the inputs received and using our best judgments, we have put together a test set of 150 problems comprised of the original problems, a new judgement (“yes” or “no”), and added missing hypotheses (if “yes” is a reasonable option).
The problems were then re-rated by experts in logic and/or linguistics. For each problem, three experts were consulted, and each expert rated 30 problems.
|
How to re-rated problems by experts in logic and/or linguistics?
|
For each problem, three experts were consulted, and each expert rated 30 problems.
|
null | false
| null |
After completing her fitting out, Aquila got underway from Bremerton, Washington, on 15 July 1982 in company with Taurus (PHM-3) and Frederick (LST-1184) and shaped a course for her home port, Key West, Florida. Frederick escorted the two guided-missile hydrofoil gunboats as far as Rodman in the Canal Zone where El Paso (LKA-117) took over. Aquila and her traveling companions transited the Panama Canal on 8 August and arrived in Key West three days later. The warship spent the remainder of 1982 conducting shakedown and other training out of her base at the Trumbo Point Annex, Naval Air Station, Key West, Florida. During the first quarter of 1983, post-shakedown repairs occupied her time. That spring, Aquila embarked upon a schedule of normal training duties out of Key West that occupied her time through the summer and into the fall of 1983. From 10 to 20 October, she joined Taurus and Spartanburg County (LST-1192) for special operations in the Caribbean Sea.
|
What was the home port of the Aquila?
|
The Aquila's home port was Trumbo Point Annex, Naval Air Station, Key West, Florida.
|
null | false
| 247
|
It is often the case that the meaning of a word is determined by its context, such as the previous word and the following word. Also, negations and emotion transitions are context sensitive. However, the memory network described in Section SECREF3 has only one memory slot with size INLINEFORM0 to represent a clause, where INLINEFORM1 is the dimension of a word embedding and INLINEFORM2 is the length of a clause. It means that when the memory network models a clause, it only considers each word separately.
In order to capture context information for clauses, we propose a new architecture which contains more memory slots to model the context with a convolutional operation. The basic architecture of the Convolutional Multiple-Slot Memory Network (in short: ConvMS-Memnet) is shown in Figure 4.
Considering the text length is usually short in the dataset used here for emotion cause extraction, we set the size of the convolutional kernel to 3. That is, the weight of word INLINEFORM0 in the INLINEFORM1 -th position considers both the previous word INLINEFORM2 and the following word INLINEFORM3 by a convolutional operation: DISPLAYFORM0
For the first and the last word in a clause, we use zero padding, INLINEFORM0 , where INLINEFORM1 is the length of a clause. Then, the attention weight for each word position in the clause is now defined as: DISPLAYFORM0
Note that we obtain the attention for each position rather than each word. It means that the corresponding attention for the INLINEFORM0 -th word in the previous convolutional slot should be INLINEFORM1 . Hence, there are three prediction output vectors, namely, INLINEFORM2 , INLINEFORM3 , INLINEFORM4 : DISPLAYFORM0
At last, we concatenate the three vectors as INLINEFORM0 for the prediction by a softmax function: DISPLAYFORM0
Here, the size of INLINEFORM0 is INLINEFORM1 , since the prediction vector is a concatenation of three outputs. We implement a concatenation operation rather than averaging or other operations because, in this way, the parameters in different memory slots can be updated separately by back propagation. The concatenation of the three output vectors forms a sequence-level feature which can be used in training. Such a feature is important especially when the size of the annotated training data is small.
For a deep architecture with multiple layers, training the network is more complex (shown in Figure 5).
For the first layer, the query is an embedding of the emotion word, INLINEFORM0 .
In the next layer, there are three input queries since the previous layer has three outputs: INLINEFORM0 , INLINEFORM1 , INLINEFORM2 . So, for the INLINEFORM3 -th layer ( INLINEFORM4 ), we need to re-define the weight function (5) as:
In the last layer, the concatenation of the three prediction vectors forms the final prediction vector to generate the answer.
For model training, we use stochastic gradient descent and back propagation to optimize the loss function. Word embeddings are learned using a skip-gram model. The size of the word embedding is 20 since the vocabulary size in our dataset is small. The dropout is set to 0.4.
Considering the text length is usually short in the dataset used here for emotion cause extraction, we set the size of the convolutional kernel to 3.
|
What is the size of the convolutional kernel?
|
The authors set the size of the convolutional kernel to 3.
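The excerpt's DISPLAYFORM equations are not shown, so the following is only an assumed Python/NumPy sketch of the described operation: a kernel-size-3 convolution over word positions with zero padding at the clause boundaries (the weight tensor W and the way the three neighbours are mixed are illustrative guesses, not the paper's exact formulation):

```python
import numpy as np

def conv_slots(clause_emb: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Kernel-size-3 convolution over word positions with zero padding.

    clause_emb: (n_words, dim) word embeddings of one clause.
    W:          (3, dim, dim)  one illustrative weight matrix per slot
                               (previous, current, next word).
    Returns a (n_words, dim) representation in which each position mixes
    the word with its left and right neighbours.
    """
    n, d = clause_emb.shape
    padded = np.vstack([np.zeros((1, d)), clause_emb, np.zeros((1, d))])
    out = np.zeros((n, d))
    for i in range(n):
        window = padded[i:i + 3]                     # previous, current, next
        out[i] = sum(window[k] @ W[k] for k in range(3))
    return out

# Toy usage: a 5-word clause with 20-dimensional embeddings (the paper uses size 20).
rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 20))
W = rng.normal(size=(3, 20, 20))
print(conv_slots(emb, W).shape)  # (5, 20)
```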
|
null | false
| null |
How do I get to Juneau, Alaska?
|
There are no roads that lead directly to Juneau, Alaska. That leaves two options, by boat or by plane. Remote Alaskan villages and cities like Juneau are one of the reasons bush pilots are so prevalent in the state.
|
|
null | false
| 455
|
The first step of PIF is to construct substitutes for the per-person latent variables ρ i and per-item latent variables τ k by fitting latent variable models to the social network and yesterday's purchases. The idea is that the per-person variables ρ i and z i drive each observed connection a ij in the social network. If we fit a latent variable model to the observed social network that has this same structure, the inferred latent variables will contain some of the same information contained by the confounder ρ i . The same idea applies to the observed data of purchases from yesterday and the confounder τ k . To implement this strategy, we fit models of networks and matrix factorization methods.
The idea behind substitutes was developed in Wang and Blei (2019a), in the deconfounder algorithm for multivariate treatments. Here, we extend these ideas to the social influence setting, where the social network and yesterday's purchases ("treatments" in our setting) are both needed to capture the information in the confounders.
We illustrate the technical ideas behind substitutes by defining them for the per-person variable ρ i. We fit a latent variable model to the observed network a. Standard generative models for networks model the likelihood of observing a connection between person i and person j using K-dimensional latent variables c i and c j on nodes, often called community membership. For concreteness, we will focus on a generative model with a Poisson likelihood, which has been well-studied for modeling social networks. The model assumes that each connection in the network is conditionally independent given the latent variables,
We perform inference in this model over the latent variables c 1:n . Each inferred variable ĉi is a substitute for some of the per-person confounders ρ i . We call the variables ĉ1:n per-person substitutes.
Figure: (Left) The per-person confounder ρ i can be separated into traits u i that affect more than one connection in a i and in traits v i that affect a single connection in a i . (Right) The latent variable model posits the same conditional independence structure over a i as the causal model on the left.
Why are substitutes valid? Figure illustrates the justification behind per-person substitutes. Consider the connections a i made by person i, and for ease of explanation, suppose that there are no traits z i , which only affect the connections a i . Then, Figure shows that the per-person confounder ρ i can be separated into the traits u i (outlined in green) that affect more than one connection in the set a i and traits v i (outlined in red), which affect only one connection. The assumed causal model on the left in Figure implies that each connection a ij ∈ a i is conditionally independent given the per-person variable u i . That is,
Consider the generative model on the right in Figure. We see that it posits the same conditional independence of observed variables a i given the latent variables c i . Assume that the generative model well-approximates the empirical distribution of the network, captured by each row a i of the adjacency matrix. Then, the per-person variables c i that render each observation a ij conditionally independent of the others must coincide with the variables u i posited in the causal model in Figure.
We make this idea concrete with a formal result and proof in the appendix.
When does this not work? The per-person substitute ĉi cannot capture the confounder v i , which only affects one of the connections a i . This is because a per-person substitute ĉi can still render the observed connections a i conditionally independent without capturing the trait that affects a single connection. Put differently, such traits leave no observable implications in the data.
What information is captured? The per-person substitute ĉi contains information about both variables ρ i and z i , since both drive the connections a i and render them conditionally independent (Figure). However, only the per-person variables ρ i are confounders. To target ρ-specific information, we can instead fit a joint factor model,
which uses the community membership c to jointly model the network and yesterday's purchases.
We can construct per-person substitutes with inferred variables ĉi as before. We will compare both methods for constructing per-person substitutes in the empirical studies.
We will use the same ideas to find substitutes for the per-item confounders τ k that capture the attributes of an item k that cause people to purchase it (or not). As we noted, finding substitutes relies on models that posit the conditional independence of observed data given latent variables (e.g., Eq. ()). For heterogeneous data such as people's purchases of multiple products, one such model is Poisson matrix factorization. Thus, to construct substitutes for some of the latent confounders τ k , we fit Poisson matrix factorization to yesterday's purchases,
where Q is the number of factors. After performing inference over the latent variables d 1:n and w 1:m , each inferred variable ŵk is a substitute for the subset of per-item variables τ k that affect more than one purchase in x k = {x 1k , . . . , x nk }. We call each variable ŵk a per-item substitute.
In this section, we will formally state a result about valid substitutes for per-person and per-item confounders.
Proposition 7.1 Let u j ⊂ {ρ j , z j } be the traits of person j that affect more than one connection in a j . Suppose there exists a variable c j so that the distribution p(a j | c j ) factorizes as,
for parameters η a j . Then, the variable c j contains the information that is contained in the set of user traits u j . Let τ k be the attributes of item k that affect more than one purchase in x k . Suppose there exists a variable w k so that the distribution p(x k | w k ) factorizes as,
for parameters η x k . Then, the variable w k contains the information contained in the variable τ k .
Proof Assume that a variable w k which satisfies the factorization above exists. Then, by definition, each pair of variables (x ik , x i ′ k ) in the set x k is d-separated by w k . Suppose that w k does not contain all the information contained in the confounders τ k , common causes of multiple variables in x k . That means there exists some subset U ⊂ τ k that is not contained in w k but which consists of parents of at least two variables, (x ak , x bk ), in x k . But we assumed that w k d-separates all pairs of variables in x k . By contradiction, w k contains all the information contained in τ k . The same proof applies for the variable c j .
Poisson influence factorization (PIF) estimates social influence from observed social network data and data from a behavior of interest, such as users purchasing items. PIF uses latent variable models of such data to construct substitutes for unobserved confounders: per-person variables that affect both connections and purchases, and per-item variables that drive purchases. PIF then uses substitutes as drop-in replacements when estimating social influence. We demonstrated the accuracy of PIF for estimating social influence on semi-synthetic datasets. In Section 7.6, we analyzed the sensitivity of PIF to assumption violations. Finally, we applied PIF to study real data from Last.fm.
Future work. In this paper, we made the assumption that influence only spreads behavior in a network from one time period to the next. One avenue of future work is modeling time-series influence. We also focused on Poisson factor models and generalized linear model of outcomes. Another avenue of future work is developing more flexible models of outcomes, networks and purchases. For example, it is worthwhile to study heterogeneous influence effects with outcome models that capture this variation. Finally, we only studied the estimation error of PIF empirically. It is important future work to establish technical results about estimation with finite samples and potentially mis-measured substitutes.
The first step of PIF is to construct substitutes for the per-person latent variables ρi and per-item latent variables τk by fitting latent variable models to the social network and yesterday’s purchases. The idea is that the per-person variables ρi and zi drive each observed connection aij in the social network. If we fit a latent variable model to the observed social network that has this same structure, the inferred latent variables will contain some of the same information contained by the confounder ρi. The same idea applies to the observed data of purchases from yesterday and the confounder τk. To implement this strategy, we fit models of networks (Holland et al., 1983; De Bacco et al., 2017; Ball et al., 2011; Contisciani et al., 2020; Peixoto, 2019; Gopalan and Blei, 2013; Hoff, 2008) and matrix factorization methods (Lee and Seung, 2001; Gopalan et al., 2013, 2015). The idea behind substitutes was developed in Wang and Blei (2019a), in the deconfounder algorithm for multivariate treatments. Here, we extend these ideas to the social influence setting, where the social network and yesterday’s purchases (“treatments” in our setting) are both needed to capture the information in the confounders.

We also focused on Poisson factor models and generalized linear model of outcomes. Another avenue of future work is developing more flexible models of outcomes, networks and purchases. For example, it is worthwhile to study heterogeneous influence effects with outcome models that capture this variation. Finally, we only studied the estimation error of PIF empirically. It is important future work to establish technical results about estimation with finite samples and potentially mis-measured substitutes.
|
"It seems there is no strong motivation behind the poisson factorization model, can we replace it with other types of matrix factorization or neural networks?"
|
The motivation behind using Poisson community models and Poisson matrix factorization is that they are well-studied, have been successful choices for modeling network and heterogeneous data such as purchases, and admit efficient inference algorithms [1, 2, 3]. We discussed this in section 3.1. We agree that it would be an interesting area of future research to fit neural models of outcomes; we highlighted it as an area of future work in our discussion in section 5.
References:
[1] Ball, B., Karrer, B. and Newman, M.E., 2011. Efficient and principled method for detecting communities in networks. Physical Review E, 84(3), p.036103.
[2] Gopalan, P.K. and Blei, D.M., 2013. Efficient discovery of overlapping communities in massive networks. Proceedings of the National Academy of Sciences, 110(36), pp.14534-14539.
[3] Gopalan, P., Hofman, J.M. and Blei, D.M., 2015, July. Scalable Recommendation with Hierarchical Poisson Factorization. In UAI.
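For readers who want to experiment, here is a minimal, non-authoritative sketch of fitting a Poisson-style factorization to a purchase matrix. It uses scikit-learn's NMF with generalized Kullback-Leibler loss, whose minimization corresponds to maximum-likelihood Poisson matrix factorization; the paper's own hierarchical Poisson factorization with Bayesian inference is more involved, and all variable names below are illustrative:

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy purchase matrix: n people x m items, small counts (illustrative data only).
rng = np.random.default_rng(0)
purchases = rng.poisson(lam=0.3, size=(50, 40))

# Minimizing generalized KL divergence is equivalent (up to constants) to
# maximum-likelihood Poisson factorization: purchases ~= d_hat @ w_hat.T
Q = 5  # number of factors
model = NMF(n_components=Q, init="random", solver="mu",
            beta_loss="kullback-leibler", max_iter=500, random_state=0)
d_hat = model.fit_transform(purchases)   # per-person factors, shape (50, 5)
w_hat = model.components_.T              # per-item substitutes, shape (40, 5)
print(d_hat.shape, w_hat.shape)
```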
|
null | false
| null |
Tell me whether these are found at the beach or on a mountains: ocean, ski resort, chair lift, shell, seaweed, hiking trail
|
Ocean: Beach
Ski Resort: Mountain
Chair Lift: Mountain
Shell: Beach
Seaweed: Beach
Hiking Trail: Mountain
|
|
2003.06651
| false
| null |
The dataset consists of a set of polysemous words: 20 nouns, 20 verbs, and 10 adjectives and specifies 20 to 100 contexts per word, with the total of 4,664 contexts, drawn from the Open American National Corpus. Given a set of contexts of a polysemous word, the participants of the competition had to divide them into clusters by sense of the word. The contexts are manually labelled with WordNet senses of the target words, the gold standard clustering is generated from this labelling.
The task allows two setups: graded WSI where participants can submit multiple senses per word and provide the probability of each sense in a particular context, and non-graded WSI where a model determines a single sense for a word in context. In our experiments we performed non-graded WSI. We considered the most suitable sense as the one with the highest cosine similarity with embeddings of the context, as described in Section SECREF9.
|
How are the different senses annotated/labeled?
|
The answers are shown as follows:
* The contexts are manually labelled with WordNet senses of the target words
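The non-graded selection rule mentioned in the evidence (pick the sense whose embedding is most similar to the context embedding) can be sketched in a few lines of Python; the sense names and vectors below are purely illustrative:

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def pick_sense(context_emb: np.ndarray, sense_embs: dict) -> str:
    """Non-graded WSI: return the sense with the highest cosine similarity
    to the embedding of the context."""
    return max(sense_embs, key=lambda s: cosine(context_emb, sense_embs[s]))

# Toy example with made-up 4-dimensional vectors.
senses = {"bank.n.01": np.array([1.0, 0.0, 0.2, 0.1]),
          "bank.n.09": np.array([0.0, 1.0, 0.1, 0.3])}
context = np.array([0.9, 0.1, 0.2, 0.0])
print(pick_sense(context, senses))  # bank.n.01
```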
|
null | false
| 267
|
Before training the neural network with the labeled tweets, it is necessary to apply the proper tokenization to every tweet. In this way, the text corpus is split into word elements, taking white spaces and the various punctuation symbols used in the language into account. This was done using the Moses package for machine translation.
We chose to limit the maximum size of each tweet considered during training to 30 words, and padded shorter tweets with zeros. Next, tweets are converted into vectors using word-based frequency, as described in Section SECREF5 . To feed the various classifiers in our evaluation, we attach the feature values onto every tweet vector.
In this work we experimented with various combinations of attached features INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 that express the user tendency. The details of each experiment, including the resulting size of each embedding can be found in Table TABREF10 , with the latter denoted `input dimension' in the table.
Before training the neural network with the labeled tweets, it is necessary to apply the proper tokenization to every tweet. In this way, the text corpus is split into word elements, taking white spaces and the various punctuation symbols used in the language into account. This was done using the Moses package for machine translation.
|
What do they use for machine translation?
|
Moses package.
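A hedged sketch of the preprocessing described in the evidence, using the sacremoses Python port of the Moses tokenizer (the original work may have used the Moses scripts directly; the vocabulary and id scheme here are assumptions):

```python
from sacremoses import MosesTokenizer  # Python port of the Moses tokenizer

MAX_LEN = 30  # the paper limits each tweet to 30 words
tokenizer = MosesTokenizer(lang="en")

def tweet_to_ids(tweet: str, vocab: dict) -> list:
    """Tokenize a tweet, map tokens to integer ids, truncate and zero-pad to MAX_LEN."""
    tokens = tokenizer.tokenize(tweet)[:MAX_LEN]
    ids = [vocab.get(tok.lower(), 0) for tok in tokens]  # 0 = unknown / padding
    return ids + [0] * (MAX_LEN - len(ids))

toy_vocab = {"hello": 1, "world": 2, "!": 3}  # illustrative frequency-ranked vocabulary
print(tweet_to_ids("Hello world!", toy_vocab))
```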
|
1905.01962
| false
| null |
We use the additional INLINEFORM0 training articles labeled by publisher as an unsupervised data set to further train the BERT model.
We first investigate the impact of pre-training on BERT-BASE's performance. We then compare the performance of BERT-BASE with BERT-LARGE. For both, we vary the number of word-pieces from each article that are used in training. We perform tests with 100, 250 and 500 word pieces.
Next, we further explore the impact of sequence length using BERT-LARGE. The model took approximately 3 days to pre-train when using 4 NVIDIA GeForce GTX 1080 Ti. On the same computer, fine tuning the model on the small training set took only about 35 minutes for sequence length 100. The model's training time scaled roughly linearly with sequence length. We did a grid search on sequence length and learning rate.
We use the additional INLINEFORM0 training articles labeled by publisher as an unsupervised data set to further train the BERT model.
We first investigate the impact of pre-training on BERT-BASE's performance.
On the same computer, fine tuning the model on the small training set took only about 35 minutes for sequence length 100.
|
How are the two different models trained?
|
They pre-train the models using 600000 articles as an unsupervised dataset and then fine-tune the models on small training set.
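A minimal, non-authoritative sketch of the fine-tuning step with a controllable word-piece budget, using the Hugging Face transformers library (the batch, labels, and num_labels below are illustrative; the unsupervised further pre-training on publisher-labeled articles is not shown):

```python
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

MAX_PIECES = 100  # the sequence lengths compared in the paper are 100, 250, and 500

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

articles = ["example hyperpartisan article text ...", "example mainstream article text ..."]
labels = torch.tensor([1, 0])

enc = tokenizer(articles, truncation=True, max_length=MAX_PIECES,
                padding="max_length", return_tensors="pt")
outputs = model(**enc, labels=labels)  # one fine-tuning step: loss and logits
outputs.loss.backward()
```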
|
null | false
| null |
How large is the Earth?
|
The Earth's radius is 6378 kilometers, and its mass is 5.972 x 10^24 kilograms. Its surface area is about 510 million square kilometers.
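As a quick consistency check (the standard sphere formula, not from the text), the quoted radius reproduces the quoted surface area:

```python
import math

radius_km = 6378  # equatorial radius quoted above
surface_area_km2 = 4 * math.pi * radius_km ** 2
print(round(surface_area_km2 / 1e6))  # about 511 million square kilometers
```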
|
|
null | false
| null |
What is the capital city of Malaysia?
|
The capital city of Malaysia is Kuala Lumpur.
|
|
null | false
| null |
Sociocultural anthropology/ethnology:
This field can trace its roots to processes of European colonization and globalization, when European trade with other parts of the world and eventual political control of overseas territories offered scholars access to different cultures. Anthropology is the scientific discipline that seeks to understand human diversity, both culturally and biologically. Originally, anthropology focused on understanding groups of people then considered "primitive" or "simple", whereas sociology focused on modern urban societies in Europe and North America, although more recently cultural anthropology looks at all cultures around the world, including those in developed countries. Over the years, sociocultural anthropology has influenced other disciplines like urban studies, gender studies, and ethnic studies, and has developed a number of sub-disciplines such as medical anthropology, political anthropology, environmental anthropology, applied anthropology, psychological anthropology, economic anthropology, and others.
|
What is the origin of ethnology?
|
Ethnology can trace its roots to European colonization and globalization.
|
null | false
| null |
Zhevakhov joined the military service as a cadet of the Ukrainian hussar regiment in 1775 and received his first major military experience in fighting with the Trans-Kubanian Circassians in 1777. He then fought in the Russo-Turkish War (1787–1792) and was wounded when storming Ochakov in 1789. Having fought in Poland in 1792, Zhevakhov was promoted to colonel in 1800 and then participated in the wars against Napoleonic France from 1805 to 1807.
During the 1812 French invasion of Russia, he commanded the Serpukhov dragoon regiment within the 3rd Army of the West and contributed to the Russian success in the July 1812 battles of Kobryn and Gorodechna, which halted the French advance toward Kiev. He then led cavalry regiments in Osten-Sacken’s corps and participated in the 1813-14 campaign against Napoleon. On 8 April 1813 he was promoted to the rank of major-general. Zhevakhov resigned due to health problems in 1817 and henceforth lived in Odessa where he died in 1837.
|
When did Ivan Zhevakhov joined the military?
|
Ivan Zhevakhov joined the military service as a cadet of the Ukrainian hussar regiment in 1775 and received his first major military experience in fighting with the Trans-Kubanian Circassians in 1777.
|
null | false
| null |
Why is the sky blue?
|
The earth's sky is blue due to how the gases in the earth's atmosphere scatter the short electromagnetic wavelengths of the sun's blue light while allowing red, orange and yellow light to pass through. The scientific name for this phenomenon is Rayleigh scattering.
|
|
null | false
| 246
|
The major advantage of our system is that we only leverage NLTK tokenization and a POS tagger to preprocess our training dataset. When implementing our neural network based clinical information extraction system, we found it is not easy to construct high quality training data due to the noisy format of clinical notes. Choosing the proper tokenizer is quite important for span identification. After several experiments, we found "RegexpTokenizer" can match our needs. This tokenizer can generate spans for each token via a sophisticated regular expression like the one below,
The major advantage of our system is that we only leverage NLTK tokenization and a POS tagger to preprocess our training dataset. When implementing our neural network based clinical information extraction system, we found it is not easy to construct high quality training data due to the noisy format of clinical notes.
|
Why it is not easy to construct high-quality training data?
|
The authors found that it is not easy to construct high quality training data due to the noisy format of clinical notes.
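The excerpt says the tokenizer uses a "sophisticated regular expression like below" but the pattern itself is not shown; the following is an assumed, illustrative pattern only, demonstrating how NLTK's RegexpTokenizer produces both tokens and character spans:

```python
from nltk.tokenize import RegexpTokenizer

# Illustrative pattern only -- the paper's actual regular expression is not shown.
# This one keeps words, decimal numbers, and single punctuation characters.
tokenizer = RegexpTokenizer(r"\w+(?:\.\w+)?|\S")

note = "BP 120/80, temp 98.6. Pt denies chest pain."
print(tokenizer.tokenize(note))
print(list(tokenizer.span_tokenize(note)))  # (start, end) character offsets per token
```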
|
null | false
| 294
|
Although internet users accept unilateral contracts such as terms of service on a regular basis, it is well known that these users rarely read them. Nonetheless, these are binding contractual agreements. A recent study suggests that up to 98% of users do not fully read the terms of service before accepting them BIBREF0. Additionally, they find that two of the top three factors users reported for not reading these documents were that they are perceived as too long ('information overload') and too complicated ('difficult to understand'). This can be seen in Table TABREF3, where a section of the terms of service for a popular phone app includes a 78-word paragraph that can be distilled down to a 19-word summary.
The European Union's BIBREF1 , the United States' BIBREF2 , and New York State's BIBREF3 show that many levels of government have recognized the need to make legal information more accessible to non-legal communities. Additionally, due to recent social movements demanding accessible and transparent policies on the use of personal data on the internet BIBREF4 , multiple online communities have formed that are dedicated to manually annotating various unilateral contracts.
We propose the task of the automatic summarization of legal documents in plain English for a non-legal audience. We hope that such a technological advancement would enable a greater number of people to enter into everyday contracts with a better understanding of what they are agreeing to. Automatic summarization is often used to reduce information overload, especially in the news domain BIBREF5 . Summarization has been largely missing in the legal genre, with notable exceptions of judicial judgments BIBREF6 , BIBREF7 and case reports BIBREF8 , as well as information extraction on patents BIBREF9 , BIBREF10 . While some companies have conducted proprietary research in the summarization of contracts, this information sits behind a large pay-wall and is geared toward law professionals rather than the general public.
In an attempt to motivate advancement in this area, we have collected 446 sets of contract sections and corresponding reference summaries which can be used as a test set for such a task. We have compiled these sets from two websites dedicated to explaining complicated legal documents in plain English.
Rather than attempt to summarize an entire document, these sources summarize each document at the section level. In this way, the reader can reference the more detailed text if need be. The summaries in this dataset are reviewed for quality by the first author, who has 3 years of professional contract drafting experience.
The dataset we propose contains 446 sets of parallel text. We show the level of abstraction through the number of novel words in the reference summaries, which is significantly higher than the abstractive single-document summaries created for the shared tasks of the Document Understanding Conference (DUC) in 2002 BIBREF11 , a standard dataset used for single document news summarization. Additionally, we utilize several common readability metrics to show that there is an average of a 6 year reading level difference between the original documents and the reference summaries in our legal dataset.
In initial experimentation using this dataset, we employ popular unsupervised extractive summarization models such as TextRank BIBREF12 and Greedy KL BIBREF13 , as well as lead baselines. We show that such methods do not perform well on this dataset when compared to the same methods on DUC 2002. These results highlight the fact that this is a very challenging task. As there is not currently a dataset in this domain large enough for supervised methods, we suggest the use of methods developed for simplification and/or style transfer.
In this paper, we begin by discussing how this task relates to the current state of text summarization and similar tasks in Section SECREF2 . We then introduce the novel dataset and provide details on the level of abstraction, compression, and readability in Section SECREF3 . Next, we provide results and analysis on the performance of extractive summarization baselines on our data in Section SECREF5 . Finally, we discuss the potential for unsupervised systems in this genre in Section SECREF6 .
We hope that such a technological advancement would enable a greater number of people to enter into everyday contracts with a better understanding of what they are agreeing to.
|
What do the authors want their technology to be used for?
|
They hope it can enable a greater number of people to enter into everyday contracts with a better understanding of what they are agreeing to.
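The evidence mentions lead baselines alongside TextRank and Greedy KL; as a rough illustration (not the authors' code, and the sentence splitter is a naive assumption), a lead-k extractive baseline simply returns the first k sentences of a contract section:

```python
import re

def lead_k_summary(section_text: str, k: int = 3) -> str:
    """Naive lead baseline: return the first k sentences of the section."""
    sentences = re.split(r"(?<=[.!?])\s+", section_text.strip())
    return " ".join(sentences[:k])

section = ("You grant us a licence to host your content. "
           "The licence is worldwide and royalty-free. "
           "It ends when you delete the content. "
           "We may keep backup copies for a limited time.")
print(lead_k_summary(section, k=2))
```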
|
null | false
| null |
Asia is the largest continent on Earth. It covers 9% of the Earth's total surface area (or 30% of its land area), and has the longest coastline, at 62,800 kilometres (39,022 mi). Asia is generally defined as comprising the eastern four-fifths of Eurasia. It is located to the east of the Suez Canal and the Ural Mountains, and south of the Caucasus Mountains (or the Kuma–Manych Depression) and the Caspian and Black Seas. It is bounded on the east by the Pacific Ocean, on the south by the Indian Ocean and on the north by the Arctic Ocean. Asia is subdivided into 49 countries, five of them (Georgia, Azerbaijan, Russia, Kazakhstan and Turkey) are transcontinental countries lying partly in Europe. Geographically, Russia is partly in Asia, but is considered a European nation, both culturally and politically.
The Gobi Desert is in Mongolia and the Arabian Desert stretches across much of the Middle East. The Yangtze River in China is the longest river in the continent. The Himalayas between Nepal and China is the tallest mountain range in the world. Tropical rainforests stretch across much of southern Asia and coniferous and deciduous forests lie farther north.
|
Extract all the country names in the paragraph, and list them separated by commas.
|
Georgia, Azerbaijan, Russia, Kazakhstan, Turkey, Nepal, China
|
null | false
| null |
Give me some ideas on where to travel for good sun and beaches.
|
There are several places around the world that you could visit for good sun exposure and sandy beaches, for example:
1- Egypt, well known for sunny weather on most days and for good sandy beaches in Hurghada and on Egypt's North Coast.
2- Dubai, also known for good sandy beaches and sunny days; however, it can get very hot at times, especially in summer.
3- Barcelona in Spain, which is also a very good option that you might like.
|
|
null | false
| null |
What is the difference between a Fender Strat and Gibson Les Paul guitar?
|
The main difference is the pickups used in each of these guitars. A Fender Strat has single-coil pickups, which give it a brighter sound, while the Gibson Les Paul uses humbucker pickups, which give it a thicker, deeper sound. Eric Clapton mostly uses Strats, while Slash of Guns N' Roses fame uses a Les Paul.
|
|
null | false
| 489
|
Problem. We apply HARI to modeling a human-assisted navigation (HAN) problem. In HAN, a human requests an agent to find an object in an indoor environment. Each task request asks the agent to go to a room of type r and find an object of type o (e.g., find a mug in a kitchen). The agent is equipped with a camera and shares its camera view with the human. We assume that the human is sufficiently familiar with the environment that they can recognize the agent's location by looking at its current view. Before issuing a task request, the human imagines a goal location (but does not reveal it to the agent). We are primarily interested in evaluating success in goal-finding, i.e. whether the agent can arrive at the human's intended goal location. Even though there could be multiple locations that match a request, the agent only succeeds if it arrives exactly at the chosen goal location. We also determine success in request-fulfilling, where the agent successfully fulfills a request if it navigates to any node that is within two meters of an object that matches the request.
While an agent is performing a task, it may request the human to provide additional information via telecommunication (e.g., a chat app). Specifically, it can ask for a description of its current location (CUR), the goal location (GOAL), or a subgoal location that is on the path from its current location to the goal location (SUB). Detail about how the subgoals are determined is in the Appendix.
Operation Environment. We construct the operation environments using the environment graphs provided by the Matterport3D simulator. Each environment graph is generated from a 3D model of a house where each node is a location in the house and each edge connects two nearby unobstructed locations. Each operation state s corresponds to a node in the graph. At any time, the agent's operation action space A consists of traversing to any of the nodes that are adjacent to its current node.
We employ a discrete bag-of-features representation for state descriptions. A bag of features represents the information that the agent extracts from the raw input that the agent perceives (e.g., an image, a language sentence). Working with this intermediate input allows us to easily vary the type and amount of information given to the agent. Specifically, we simulate two settings of descriptions: dense and sparse. At evaluation time, the agent perceives sparse descriptions and requests the assistant for dense descriptions. A dense description of a current location contains the room name at the location, and the features of M objects restricted to be within δ meters of the location. The features of each object consist of (i) its name, (ii) horizontal and vertical angles (relative to the current viewpoint), and (iii) distance (in meters) from the object to the current location. A dense description of a goal follows the same representation scheme. In the sparse setting, the current-location description does not include the room name. Moreover, we remove the features of objects that are not in the top 100 most frequent objects, emulating an imperfect object detector module. The sparse goal description (the task request) has only features of the target object and the room name where the object is located. In particular, if a subgoal location is adjacent to or coincides with the agent's current location, instead of describing room and object features, the human specifies the ground-truth action to go to the subgoal (an action is specified by its horizontal and vertical angles, and travel distance).
Experimental Procedure. We conduct our experiments in three phases. In the pre-training phase, we learn an operation policy π with dense descriptions of the current location and the goal. In the training phase, the agent perceives a sparse description of its current location and is given a sparse initial goal description. We use advantage actor-critic to learn an interaction policy ψ θ that controls how the agent communicates with the human and navigates in an environment. The human always returns dense descriptions. The interaction policy is trained in environments that are previously seen as well as unseen during pre-training. Finally, in the evaluation phase, the interaction policy is tested on three conditions: seen environment and target object type but starting from a new room (UNSEENSTR), seen environment but new target object type (UNSEENOBJ), and new environment (UNSEENENV).
Problem. We apply HARI to modeling a human-assisted navigation (HAN) problem. In HAN, a human requests an agent to find an object in an indoor environment. Each task request asks the agent to go to a room of type r and find an object of type o (e.g., find a mug in a kitchen). The agent is equipped with a camera and shares its camera view with the human. We assume that the human is sufficiently familiar with the environment that they can recognize the agent’s location by looking at its current view. Before issuing a task request, the human imagines a goal location (but does not reveal it to the agent). We are primarily interested in evaluating success in goal-finding, i.e., whether the agent can arrive at the human’s intended goal location. Even though there could be multiple locations that match a request, the agent only succeeds if it arrives exactly at the chosen goal location. We also determine success in request-fulfilling, where the agent successfully fulfills a request if it navigates to any node that is within two meters of an object that matches the request. While an agent is performing a task, it may request the human to provide additional information via telecommunication (e.g., a chat app). Specifically, it can ask for a description of its current location (CUR), the goal location (GOAL), or a subgoal location that is on the path from its current location to the goal location (SUB). Details about how the subgoals are determined are given in the Appendix.
|
Page 2 states that the agent has only a partial description of a goal state. So, does the uncertainty in this task reduce to uncertainty about the goal state? Toward the end of the paper I discovered that the uncertainty actually comes from missing environment annotations. This needs to be clarified.
|
The agent also has imperfect information about its *current state*, not just the goal state (sec 5).
|
null | false
| null |
What are the supplies needed for block printing?
|
Block printing is an art form where the artist carves a block (most commonly vinyl or rubber) and uses ink to press the image as a print. The supplies needed for this are the block to carve, a carving tool, a roller for ink application, ink, and paper or another material to print the image on. A smooth, flat surface is needed to roll out the ink as well; a small sheet of glass or plexiglass works well for this.
|
|
null | false
| null |
What is trekking? Discuss various types of trekking.
|
Trekking refers to travel by foot through remote destinations that are in most cases inaccessible and unknown to visitors. Trekking is mainly of four kinds:
1. Easy Trekking: This type of activity is for beginners and basically involves the view of colorful horizons or sights of beautiful nature.
2. Moderate Trekking: Moderate trekking is more challenging than easy trekking. It is performed at medium to slightly higher altitudes. The individuals doing this type of trekking should be energetic.
3. Strenuous Trekking: This type of trekking requires strenuous physical effort. It is basically done in high-altitude regions and is suitable only for those who have some previous experience of trekking.
4. Difficult Trekking: The fourth type of trekking is done on steep slopes. Generally professional or experienced trekkers do this type of trekking. It requires a high level of precision, skill, and endurance from the trekker.
|
|
null | false
| null |
Primates diverged from other mammals about 85 million years ago (mya), in the Late Cretaceous period, with their earliest fossils appearing over 55 mya, during the Paleocene. Primates produced successive clades leading to the ape superfamily, which gave rise to the hominid and the gibbon families; these diverged some 15–20 mya. African and Asian hominids (including orangutans) diverged about 14 mya. Hominins (including the Australopithecine and Panina subtribes) parted from the Gorillini tribe (gorillas) between 8–9 mya; Australopithecine (including the extinct biped ancestors of humans) separated from the Pan genus (containing chimpanzees and bonobos) 4–7 mya. The Homo genus is evidenced by the appearance of H. habilis over 2 mya, while anatomically modern humans emerged in Africa approximately 300,000 years ago.
|
Summarize the key developments in human evolution based on the paragraph below
|
1) Primates diverged from other mammals 85 million years ago
2) The hominid family emerged 15-20 million years ago.
3) Hominins parted from gorillas between 8 and 9 million years ago
4) Australopithecines separated from the Pan genus 4-7 million years ago
5) The Homo genus appeared over 2 million years ago
6) Anatomically modern humans emerged in Africa 300,000 years ago
|
null | false
| null |
How to file tax for 2022?
|
The last day to submit 2022 tax returns or to file an extension is April 18th, 2023. You can use the free tools the IRS provides on its website to fill in the required forms. Commercial software like TurboTax and H&R Block can ease some of the pain by providing a user-friendly UI with prompts, and offers some consultancy help in the app. If you have a complicated income structure, for example, if you are self-employed or have multiple streams of investment income, consider finding a reliable CPA and letting the professionals help with the tax.
|
|
null | false
| 471
|
We never preserve previous data in the queue, to ensure the model training is sequentially conducted on disjoint chunks. For the optional data replay strategy, a separate replay buffer is used to preserve a small fraction of data from each previous chunk, e.g., 10% in our experiments.
|
Or is it simply preserved and enqueued/dequeued as usual?
|
We never preserve previous data in the queue, to ensure the model training is sequentially conducted on disjoint chunks. For the optional data replay strategy, a separate replay buffer is used to preserve a small fraction of data from each previous chunk, e.g., 10% in our experiments. We have added these details in Appendix A.2.
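A minimal sketch of such a replay buffer, assuming a 10% retention fraction and uniform random sampling (the sampling rule is an assumption, not stated in the answer above):

```python
import random

class ChunkReplayBuffer:
    """Keeps a small random fraction of each finished data chunk for replay."""

    def __init__(self, keep_fraction=0.1, seed=0):
        self.keep_fraction = keep_fraction
        self.rng = random.Random(seed)
        self.buffer = []

    def add_chunk(self, chunk):
        """After training on a chunk, retain a random subset of its samples."""
        k = max(1, int(len(chunk) * self.keep_fraction))
        self.buffer.extend(self.rng.sample(chunk, k))

    def sample(self, n):
        """Draw replay samples to mix with the current chunk's training data."""
        return self.rng.sample(self.buffer, min(n, len(self.buffer)))
```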
|
null | false
| null |
What is heterosis?
|
Heterosis, hybrid vigor, or outbreeding enhancement is the improved or increased function of any biological quality in a hybrid offspring. An offspring is heterotic if its traits are enhanced as a result of mixing the genetic contributions of its parents. The heterotic offspring often has traits that are more than the simple addition of the parents' traits, and can be explained by Mendelian or non-Mendelian inheritance. Typical heterotic/hybrid traits of interest in agriculture are higher yield, quicker maturity, stability, drought tolerance, etc.
|
|
null | false
| null |
What are some good date ideas for vegans?
|
Tour an animal rescue, dine at a local vegan restaurant, visit the art museum, explore a botanical garden, plant-based picnic in the park
|
|
2003.03612
| false
| null |
While these changes do happen, they are generally quite rare. Most binomials — frozen or not — are ordered in one way about the same percentage of the time, regardless of community or the year. We develop a null model to determine how much variation in binomial orderings we might expect across communities and across time, if binomial orderings were randomly ordered according to global asymmetry values. We find that there is less variation across time and communities in the data compared to this model, implying that binomial orderings are indeed remarkably stable.
We develop a null model to determine how much variation in binomial orderings we might expect across communities and across time, if binomial orderings were randomly ordered according to global asymmetry values.
|
What new model is proposed for binomial lists?
|
The answers are shown as follows:
* null model
|
null | false
| 113
|
As far back as 42 BC, the philosopher Cicero raised the issue that although there were many oratory classes, there were none for conversational skills BIBREF0 . He highlighted how important they were not only for politics, but also for educational purposes. Among other conversational norms, he claimed that people should know when to talk in a conversation, what to talk about depending on the subject of the conversation, and that they should not talk about themselves.
Norms such as these may become social conventions and are not learnt at home or at school. Social conventions are dynamic and may change according to context, culture and language. In online communication, new commonsense practices evolve faster and are accepted as norms BIBREF1 , BIBREF2 . There is no discipline for this in elementary or high schools, and there are few linguistics researchers doing research in this field.
On the other hand, within the Artificial Intelligence area, some Conversational Systems have been created in the past decades since the test proposed by Alan Turing in 1950. The test consists of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human BIBREF3 . Turing proposed that a human evaluator would judge natural language conversations between a human and a machine that is designed to generate human-like responses. Since then, many systems have been created to pass the Turing test. Some of them have won prizes, some not BIBREF4 . Although in this paper we do not focus on building conversational systems that pass the Turing test, we focus on NDS. From BIBREF5 , "NDS are systems that try to improve usability and user satisfaction by imitating human behavior". We refer to Conversational Systems as NDS, where the dialogues are expressed as natural language texts, either from artificial intelligent agents (a.k.a. bots) or from humans.
That said, the current popular name to systems that have the ability to make a conversation with humans using natural language is Chatbot. Chatbots are typically used in conversational systems for various practical purposes, including customer service or information acquisition. Chatbots are becoming more widely used by social media software vendors. For example, Facebook recently announced that it would make Facebook Messenger (its 900-million-user messaging app by 2016), into a full-fledged platform that allows businesses to communicate with users via chatbots. Google is also building a new mobile-messaging service that uses artificial intelligence know-how and chatbot technology. In addition, according to the Wall Street Journal, there are more than 2 billion users of mobile apps. Still, people can be reluctant to install apps. So it is believed that social messaging can be a platform and chatbots may provide a new conversational interface for interacting with online services, as chatbots are easier to build and deploy than apps BIBREF6 .
China seems to be the place where chatbot adoption and use is most advanced today. For example, China's popular WeChat messaging platform can take payments, scan QR codes, and integrate chatbot systems. WeChat integrates e-mail, chat, videocalls and sharing of large multimedia files. Users can book flights or hotels using a mixed, multimedia interaction with active bots. WeChat was first released in 2011 by Tencent, a Chinese online-gaming and social-media firm, and today more than 700 million people use it, being one of the most popular messaging apps in the world (The Economist 2016). WeChat has a mixture of real-live customer service agents and automated replies (Olson 2016).
Still, current existing chatbot engines do not properly handle a group chat with many users and many chatbots. This makes the chatbots considerably less social, which is a problem since there is a strong demand for social chatbots that are able to provide different kinds of services, from traveling packages to finance advisors. This happens because there is a lack of methods and tools to design and engineer the coordination and mediation among chatbots and humans, as we present in Sections 2 and 3. In this paper, we refer to conversational systems that are able to interact with one or more people or chatbots in a multi-party chat as MPCS. Altogether, this paper is not meant to advance the state of the art on the norms for MPCS. Instead, the main contributions of this paper are threefold:
We then present some discussion and future work in the last section.
"NDS are systems that try to improve usability and user satisfaction by imitating human behavior".
|
What is the NDS?
|
NDS are systems that try to improve usability and user satisfaction by imitating human behavior.
|
null | false
| null |
When did the Portuguese royal family move to Brazil?
|
The Portuguese royal family, along with its court, servants and a library with more than 60,000 books, settled in Brazil between 1808 and 1821, with the first wave of migration starting on November 29, 1807.
|
|
null | false
| null |
How will Artificial Intelligence change the education industry in the future?
|
If used correctly, Artificial Intelligence will enable the ability to provide differentiated and hyper-individualized educational experiences for the entirety of the individual's life. By analyzing curriculum, Artificial Intelligence will cater to each student's learning style, accelerating their ability to fully understand and retain the desired material, all while identifying programs with the highest potential for academic success.
|
|
null | false
| null |
Bernoulli's principle is a key concept in fluid dynamics that relates pressure, speed and height. Bernoulli's principle states that an increase in the speed of a fluid occurs simultaneously with a decrease in static pressure or a decrease in the fluid's potential energy. The principle is named after the Swiss mathematician and physicist Daniel Bernoulli, who published it in his book Hydrodynamica in 1738. Although Bernoulli deduced that pressure decreases when the flow speed increases, it was Leonhard Euler in 1752 who derived Bernoulli's equation in its usual form. The principle is only applicable for isentropic flows: when the effects of irreversible processes (like turbulence) and non-adiabatic processes (e.g. thermal radiation) are small and can be neglected.
Bernoulli's principle can be derived from the principle of conservation of energy. This states that, in a steady flow, the sum of all forms of energy in a fluid is the same at all points that are free of viscous forces. This requires that the sum of kinetic energy, potential energy and internal energy remains constant. Thus an increase in the speed of the fluid—implying an increase in its kinetic energy (dynamic pressure)—occurs with a simultaneous decrease in (the sum of) its potential energy (including the static pressure) and internal energy. If the fluid is flowing out of a reservoir, the sum of all forms of energy is the same because in a reservoir the energy per unit volume (the sum of pressure and gravitational potential ρ g h) is the same everywhere.
Bernoulli's principle can also be derived directly from Isaac Newton's second Law of Motion. If a small volume of fluid is flowing horizontally from a region of high pressure to a region of low pressure, then there is more pressure behind than in front. This gives a net force on the volume, accelerating it along the streamline.
Fluid particles are subject only to pressure and their own weight. If a fluid is flowing horizontally and along a section of a streamline, where the speed increases it can only be because the fluid on that section has moved from a region of higher pressure to a region of lower pressure; and if its speed decreases, it can only be because it has moved from a region of lower pressure to a region of higher pressure. Consequently, within a fluid flowing horizontally, the highest speed occurs where the pressure is lowest, and the lowest speed occurs where the pressure is highest.
|
How can Bernoulli's principle be derived from Newton's second law of motion?
|
Bernoulli's principle can be derived directly from Isaac Newton's second Law of Motion. If a small volume of fluid is flowing horizontally from a region of high pressure to a region of low pressure, then there is more pressure behind than in front. This gives a net force on the volume, accelerating it along the streamline.
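The answer above can be made concrete with the standard textbook calculation (not quoted from the passage):

```latex
% Newton's second law for a thin fluid element of cross-section A, length dx
% (mass \rho A\,dx), moving steadily along a horizontal streamline:
(\rho A\,dx)\, v\frac{dv}{dx} \;=\; -A\,\frac{dp}{dx}\,dx
% Cancel A\,dx and integrate along the streamline:
\rho\, v\frac{dv}{dx} = -\frac{dp}{dx}
\;\;\Longrightarrow\;\;
p + \tfrac{1}{2}\rho v^{2} = \text{constant}
% Reinstating the weight of the element (elevation h) gives the full principle:
p + \tfrac{1}{2}\rho v^{2} + \rho g h = \text{constant}
```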
|
null | false
| null |
why is boba tea so popular now in the US?
|
First of all, boba tea is delicious! Like coffee, many people are tea lovers and need one or two brews every day to boost their energy. To add flavor to your favorite tea, mix it with brown sugar, dairy or non-dairy milk. That creates a large variety of milk teas. Boba tea recipes can be extremely creative; new items come out almost monthly. For toppings, the classical choices are tapioca boba, herbal jelly, fruity pop pearls, fruity flavor jelly, egg pudding, almond pudding, and red bean paste... Milk foams, ice cream, and Oreo cookies are now part of the innovative new items. When walking into a boba tea shop, you will be amazed by the wide range of choices! Secondly, just like coffee shops, boba tea shops are now chill locations for friend gatherings, social meetups, studying, or a work afternoon. The decor inside tends to be instagram-able and attracts young folks to check in and post awesome pictures on their social media accounts.
|
|
null | false
| null |
A computer worm is a standalone malware computer program that replicates itself in order to spread to other computers. It often uses a computer network to spread itself, relying on security failures on the target computer to access it. It will use this machine as a host to scan and infect other computers. When these new worm-invaded computers are controlled, the worm will continue to scan and infect other computers using these computers as hosts, and this behaviour will continue. Computer worms use recursive methods to copy themselves without host programs and distribute themselves based on exploiting the advantages of exponential growth, thus controlling and infecting more and more computers in a short time. Worms almost always cause at least some harm to the network, even if only by consuming bandwidth, whereas viruses almost always corrupt or modify files on a targeted computer.
Many worms are designed only to spread, and do not attempt to change the systems they pass through. However, as the Morris worm and Mydoom showed, even these "payload-free" worms can cause major disruption by increasing network traffic and other unintended effects.
History
The actual term "worm" was first used in John Brunner's 1975 novel, The Shockwave Rider. In the novel, Nichlas Haflinger designs and sets off a data-gathering worm in an act of revenge against the powerful men who run a national electronic information web that induces mass conformity. "You have the biggest-ever worm loose in the net, and it automatically sabotages any attempt to monitor it. There's never been a worm with that tough a head or that long a tail!"
The second ever computer worm was devised to be an anti-virus software. Named Reaper, it was created by Ray Tomlinson to replicate itself across the ARPANET and delete the experimental Creeper program (the first computer worm, 1971).
On November 2, 1988, Robert Tappan Morris, a Cornell University computer science graduate student, unleashed what became known as the Morris worm, disrupting many computers then on the Internet, guessed at the time to be one tenth of all those connected. During the Morris appeal process, the U.S. Court of Appeals estimated the cost of removing the worm from each installation at between $200 and $53,000; this work prompted the formation of the CERT Coordination Center and Phage mailing list. Morris himself became the first person tried and convicted under the 1986 Computer Fraud and Abuse Act.
|
Given this paragraph about Computer Worm history, tell me the name of the earliest computer worm that caused damages to many computers on the Internet and the date when it was unleashed by his creator.
|
The first computer worm publicly known to cause real disruptions and malfunctions on computers connected to the Internet is the Morris worm, created by Robert Morris, a computer science student at Cornell University, and unleashed on November 2, 1988.
|
null | false
| 501
|
One of the things that caught our attention is the superior performance of NUQ on ImageNet, given that it has very similar results to DDU on CIFAR-100. One of our hypotheses was that embeddings have a more complex and multi-modal distribution for the more complex ImageNet dataset compared to the simpler CIFAR-100. To check this, we made t-SNE based embeddings of out-of-distribution and test data (see Figure 5). While we understand the limitations of this type of visualization, the ImageNet embeddings appear to be much more irregular compared to the well-shaped clusters for CIFAR-100. Because of that, modelling a class with a single Gaussian (as in DDU) might not work very well for ImageNet. The NUQ approach models the distributions in a much more flexible way, which is beneficial for approximating complex distributions. We hypothesize that this is the reason for NUQ's superior performance.
|
To what extent is NUQ fundamentally different from DUQ? (Yes, using a sum of kernels rather than a mixture of Gaussians, depending on how many Gaussians are fitted, of course.) And the results seem to be very similar between the two in Table 1.
|
Fundamentally, you either fit a Gaussian per class or use a very flexible nonparametric model. As we show in Appendix A.5, the embedding structure for ImageNet seems to be much more complicated than just Gaussians. The other point is that we do not require retraining of the base network as in DUQ. Finally, our method is sound from a statistical point of view, and we expect to complement the study with statistical analysis in future work.
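The contrast between the two density models can be sketched as follows; this is a generic illustration of a per-class Gaussian fit versus a kernel-sum estimate, not the authors' code, and the bandwidth value is arbitrary:

```python
import numpy as np

def gaussian_log_density(x, class_embeddings):
    """Single-Gaussian-per-class density (DDU-style): fit a mean and covariance."""
    mu = class_embeddings.mean(axis=0)
    cov = np.cov(class_embeddings, rowvar=False) + 1e-6 * np.eye(class_embeddings.shape[1])
    diff = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff) + logdet
                   + len(x) * np.log(2 * np.pi))

def kernel_density(x, class_embeddings, bandwidth=1.0):
    """Nonparametric kernel-sum density (NUQ-style): one kernel per training point."""
    dists = np.linalg.norm(class_embeddings - x, axis=1)
    return np.mean(np.exp(-0.5 * (dists / bandwidth) ** 2))
```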
|
null | false
| null |
I’m walking to work and I’d like to carry helpful things in my backpack for a big presentation. Which of these should I pack: rotary telephone, eighty inch monitor, laptop, notepad, beer keg, pen.
|
The laptop, notepad, and pen might be helpful in order to retrieve and share content or take notes during the presentation. The eighty inch monitor won’t fit in a backpack, although it might be helpful if they have one at your presentation. The beer keg and rotary telephone probably won’t be necessary for a presentation.
|
|
1905.10810
| false
| null |
Published work on language correction for Polish dates back at least to the 1970s, when the simplest Levenshtein distance solutions were used for cleaning mainframe inputs BIBREF5 , BIBREF6 . Spelling correction tests described in the literature have tended to focus on one approach applied to a specific corpus. Limited examples include works on spellchecking mammography reports and tweets BIBREF7 , BIBREF4 . These works emphasized the importance of tailoring correction systems to the specific problems of the corpora they are applied to. For example, mammography reports suffer from poor typing, which in this case is repetitive work done in a relative hurry. Tweets, on the other hand, tend to contain emoticons and neologisms that can trick solutions based on rules and dictionaries, such as LanguageTool. The latter is, by itself, fairly well suited for Polish texts, since a number of extensions to the structure of this application were inspired by problems with the morphology of the Polish language BIBREF3 .
Spelling correction tests described in the literature have tended to focus on one approach applied to a specific corpus. Limited examples include works on spellchecking mammography reports and tweets BIBREF7 , BIBREF4 . These works emphasized the importance of tailoring correction systems to the specific problems of the corpora they are applied to. For example, mammography reports suffer from poor typing, which in this case is repetitive work done in a relative hurry. Tweets, on the other hand, tend to contain emoticons and neologisms that can trick solutions based on rules and dictionaries, such as LanguageTool. The latter is, by itself, fairly well suited for Polish texts, since a number of extensions to the structure of this application were inspired by problems with the morphology of the Polish language BIBREF3 .
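For reference, the Levenshtein (edit) distance that those early systems relied on is the textbook dynamic program below; this is a generic implementation, not code from the cited works:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions, and
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# e.g. levenshtein("ksiazka", "książka") == 2 (two substituted characters)
```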
|
Which specific error correction solutions have been proposed for specialized corpora in the past?
|
The answers are shown as follows:
* spellchecking mammography reports and tweets BIBREF7 , BIBREF4
|
null | false
| null |
Give me a bunch of ways I can keep my house cool in the summer
|
- Install whole house AC that's integrated with your heating system
- Buy an in-window unit to keep an individual room cool
- Install a whole house fan that pulls in cool air from open windows at night into living area and exhausts the hot air into the attic
- Place fans throughout the house but place them in front of open windows at night to pull in cool air
- Install a swamp cooler, it's sometimes cheaper than installing a full AC system
- Make sure to close windows and curtains in the morning hours after the house cools down overnight
- Get a cooler filled with ice and place a fan over it, forcing cold air into the room. The ice will melt fast so be ready to refill often. Not the most efficient, but in a pinch it can help
|
|
null | false
| null |
Who is John D. Rockefeller?
|
John Davison Rockefeller Sr. (July 8, 1839 – May 23, 1937) was an American business magnate and philanthropist. He has been widely considered the wealthiest American of all time and the richest person in modern history. Rockefeller was born into a large family in Upstate New York that moved several times before eventually settling in Cleveland. He became an assistant bookkeeper at age 16 and went into several business partnerships beginning at age 20, concentrating his business on oil refining. Rockefeller founded the Standard Oil Company in 1870. He ran it until 1897 and remained its largest shareholder.
Rockefeller's wealth soared as kerosene and gasoline grew in importance, and he became the richest person in the country, controlling 90% of all oil in the United States at his peak. Oil was used throughout the country as a light source until the introduction of electricity, and as a fuel after the invention of the automobile. Furthermore, Rockefeller gained enormous influence over the railroad industry which transported his oil around the country. Standard Oil was the first great business trust in the United States. Rockefeller revolutionized the petroleum industry and, through corporate and technological innovations, was instrumental in both widely disseminating and drastically reducing the production cost of oil. His company and business practices came under criticism, particularly in the writings of author Ida Tarbell.
|
|
null | false
| null |
Give me a bulleted list of the 5 most recent world cup winners and the location of the tournament
|
Here are the 5 most recent World Cup winners and the location of the tournament:
* Argentina 2022 FIFA World Cup in Qatar
* France 2018 FIFA World Cup in Russia
* Germany 2014 FIFA World Cup in Brazil
* Spain 2010 FIFA World Cup in South Africa
* Italy 2006 FIFA World Cup in Germany
|
|
null | false
| null |
Amtrak California utilizes a livery and logo that is different from the standard Amtrak colors. All state-contracted Thruway Motorcoach and state-owned locomotives and passenger cars, with the exception of equipment used on the "Pacific Surfliner" route, are painted in the "California Colors" of blue and yellow. Each permanently assigned passenger car is named after a geographical feature of California.
The "Capitol Corridor" and "San Joaquin" trains are the two routes most identified with the Amtrak California image, since they extensively use "California Colored" equipment. The other state-supported rail route, the "Pacific Surfliner", uses a unique blue and silver paint scheme that is different from the other Amtrak California-branded trains.
The NPCUs that operate on the Capitol Corridor and San Joaquin are painted differently from the locomotives, as they are painted in the old "Caltrain" locomotive livery, with an Amtrak California and Caltrans logo on the sides. The single level cars, both the Comet cars and the Horizon dinettes, have three stripes closest to the boarding doors, orange, blue and green, which resemble the NJ Transit livery that was previously used on the Comet cars.
https://en.wikipedia.org/wiki/Amtrak_California
|
From this passage, extract the names of the three Amtrak California routes and separate them with commas.
|
Capitol Corridor, San Joaquin, and Pacific Surfliner
|
null | false
| null |
Identify which car manufacturer is Chinese or American: RAM, FAW
|
RAM is American, FAW is Chinese
|
|
null | false
| 268
|
Spoken language understanding is a challenging and difficult task BIBREF3. All problems present in natural language understanding are significantly exacerbated by several factors related to the characteristics of spoken language.
Firstly, each ASR engine introduces a mixture of systematic and stochastic errors which are intrinsic to the procedure of transcription of spoken audio. The quality of transcription, as measured by the popular word error rate (WER), attains the level of 5%-15% WER for high quality ASR systems for English BIBREF4, BIBREF5, BIBREF6, BIBREF7. The WER highly depends on the evaluation data difficulty and the speed to accuracy ratio. Importantly, errors in the transcription appear stochastically, both in audio segments which carry important semantic information, as well as in inessential parts of the conversation.
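For reference, the word error rate (WER) quoted above is the standard edit-distance-based measure:

```latex
% Word error rate from the minimum-edit-distance alignment of the ASR
% hypothesis against the reference transcript:
\mathrm{WER} = \frac{S + D + I}{N}
% S: substitutions, D: deletions, I: insertions, N: words in the reference
```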
Another challenge stems from the fact that the structure of conversation changes dramatically when a human assumes an agency in the other party. When humans are aware that the other party is a machine (as is the case in dialogue chatbot interfaces), they tend to speak in short, well-structured turns following the subject-verb-object (SVO) sentence structure with a minimal use of relative clauses BIBREF8. This structure is virtually nonexistent in spontaneous speech, where speakers allow for a non-linear flow of conversation. This flow is further obscured by constant interruptions from backchannel or cross-talk utterances, repetitions, non-verbal signaling, phatic expressions, linguistic and non-linguistic fillers, restarts, and ungrammatical constructions.
A substantial, yet often neglected difficulty, stems from the fact that most SLU tasks are applied to transcript segments representing a single turn in a typical scripted conversation. However, turn-taking in unscripted human-human conversations is far more haphazard and spontaneous than in scripted dialogues or human-bot conversations. As a result, a single logical turn may span multiple ASR segments and be interwoven with micro-turns of the other party or the contrary - be part of a larger segment containing many logical turns.
ASR transcripts lack punctuation, normalization, true-casing of words, and proper segmentation into phrases as these features are not present in the conversational input BIBREF9. These are difficult to correct as the majority of NLP algorithms have been trained and evaluated on text and not on the output of an ASR system. Thus, a simple application of vanilla NLP tools to ASR transcripts seldom produces actionable and useful results.
Finally, speech based interfaces are defined by a set of dimensions, such as domain and vocabulary (retail, finance, entertainment), language (English, Spanish), application (voice search, personal assistant, information retrieval), and environment (mobile, car, home, distant speech recognition). These dimensions make it very challenging to provide a low cost domain adaptation.
Last but not least, production ASR systems impose strict constraints on the additional computation that can be performed. Since we operate in a near real-time environment, this precludes the use of computationally expensive language models which could compensate for some of the ASR errors.
ASR transcripts lack punctuation, normalization, true-casing of words, and proper segmentation into phrases as these features are not present in the conversational input (Żelasko et al., 2018). These are difficult to correct as the majority of NLP algorithms have been trained and evaluated on text and not on the output of an ASR system. Thus, a simple application of vanilla NLP tools to ASR transcripts seldom produces actionable and useful results.
|
Have the majority of NLP algorithms been trained and evaluated on text or the output of an ASR system?
|
On text.
|
1911.13087
| false
| null |
The BD-4SK-ASR Dataset ::: The Language Model
We created the language model from the transcriptions. The model was created using CMUSphinx, in which the (fixed) discount mass is 0.5, and backoffs are computed using the ratio method. The model includes 283 unigrams, 5337 bigrams, and 6935 trigrams.
The BD-4SK-ASR Dataset ::: The Language Model
We created the language model from the transcriptions. The model was created using CMUSphinx, in which the (fixed) discount mass is 0.5, and backoffs are computed using the ratio method. The model includes 283 unigrams, 5337 bigrams, and 6935 trigrams.
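To illustrate what the reported unigram/bigram/trigram counts refer to — this is a generic counting sketch, not the CMUSphinx toolchain, and the example sentences are placeholders:

```python
from collections import Counter

def ngram_counts(transcriptions, n):
    """Count n-grams over a list of whitespace-tokenized transcription lines."""
    counter = Counter()
    for line in transcriptions:
        tokens = ["<s>"] + line.split() + ["</s>"]
        for i in range(len(tokens) - n + 1):
            counter[tuple(tokens[i:i + n])] += 1
    return counter

transcriptions = ["the cat sat", "the dog sat down"]  # placeholder sentences
for n in (1, 2, 3):
    print(n, "distinct n-grams:", len(ngram_counts(transcriptions, n)))
```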
|
What are the results of the experiment?
|
They were able to create a language model from the dataset, but did not test.
|
1905.00563
| false
| null |
Here, we demonstrate another potential use of adversarial modifications: finding erroneous triples in the knowledge graph. Intuitively, if there is an error in the graph, the triple is likely to be inconsistent with its neighborhood, and thus the model should put least trust on this triple. In other words, the error triple should have the least influence on the model's prediction of the training data. Formally, to find the incorrect triple $\langle s^{\prime }, r^{\prime }, o\rangle $ in the neighborhood of the train triple $\langle s, r, o\rangle $ , we need to find the triple $\langle s^{\prime },r^{\prime },o\rangle $ that results in the least change $\Delta _{(s^{\prime },r^{\prime })}(s,r,o)$ when removed from the graph.
if there is an error in the graph, the triple is likely to be inconsistent with its neighborhood, and thus the model should put least trust on this triple. In other words, the error triple should have the least influence on the model's prediction of the training data.
Here, we demonstrate another potential use of adversarial modifications: finding erroneous triples in the knowledge graph. Intuitively, if there is an error in the graph, the triple is likely to be inconsistent with its neighborhood, and thus the model should put least trust on this triple. In other words, the error triple should have the least influence on the model's prediction of the training data. Formally, to find the incorrect triple $\langle s^{\prime }, r^{\prime }, o\rangle $ in the neighborhood of the train triple $\langle s, r, o\rangle $ , we need to find the triple $\langle s^{\prime },r^{\prime },o\rangle $ that results in the least change $\Delta _{(s^{\prime },r^{\prime })}(s,r,o)$ when removed from the graph.
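In procedural terms, the search can be sketched as follows; this is only an illustration, where `score_change` is a hypothetical helper standing in for the model-specific quantity Δ:

```python
def find_suspect_triple(train_triple, neighborhood, score_change):
    """Return the neighboring triple whose removal changes the model's score of
    the training triple the least, i.e. the triple the model trusts least."""
    s, r, o = train_triple
    best, best_delta = None, float("inf")
    for (s2, r2) in neighborhood:                   # candidate triples <s2, r2, o>
        delta = score_change((s2, r2), (s, r, o))   # hypothetical Δ_{(s2,r2)}(s,r,o)
        if abs(delta) < best_delta:
            best, best_delta = (s2, r2, o), abs(delta)
    return best
```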
|
How is this approach used to detect incorrect facts?
|
The answers are shown as follows:
* if there is an error in the graph, the triple is likely to be inconsistent with its neighborhood, and thus the model should put least trust on this triple. In other words, the error triple should have the least influence on the model's prediction of the training data.
|
null | false
| null |
What are Arabic numerals?
|
Arabic numerals refers to the ten digits commonly used today to represent numbers. The digits are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. They were adopted by European mathematicians around the 10th century C.E. from numerals used by Arabic speakers in North Africa. The numerals are used in a positional or place-value notation which makes complex calculations by humans easier.
|
|
1809.08652
| true
| null |
In this work, we use the datasets released by BIBREF1 and HEOT dataset provided by BIBREF0 . The datasets obtained pass through these steps of processing: (i) removal of punctuation, stopwords, URLs, numbers, emoticons, etc. This was then followed by transliteration using the Xlit-Crowd conversion dictionary and translation of each word to English using a Hindi-to-English dictionary. To deal with the spelling variations, we manually added some common variations of popular Hinglish words. The final dictionary comprised 7200 word pairs. Additionally, to deal with profane words, which are not present in Xlit-Crowd, we had to make a profanity dictionary (with 209 profane words) as well. Table TABREF3 gives some examples from the dictionary.
Both the HEOT and BIBREF1 datasets contain tweets which are annotated in three categories: offensive, abusive and none (or benign). Some examples from the dataset are shown in Table TABREF4 . We use a LSTM based classifier model for training our model to classify these tweets into these three categories. An overview of the model is given in the Figure FIGREF12 . The model consists of one layer of LSTM followed by three dense layers. The LSTM layer uses a dropout value of 0.2. Categorical crossentropy loss was used for the last layer due to the presence of multiple classes. We use Adam optimizer along with L2 regularisation to prevent overfitting. As indicated by the Figure FIGREF12 , the model was initially trained on the dataset provided by BIBREF1 , and then re-trained on the HEOT dataset so as to benefit from the transfer of learned features in the last stage. The model hyperparameters were experimentally selected by trying out a large number of combinations through grid search.
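A minimal Keras sketch of the described classifier (one LSTM layer with 0.2 dropout followed by three dense layers, categorical cross-entropy, Adam with L2 regularization). The layer widths, vocabulary size, and regularization strength are assumptions, since the paper selected its hyperparameters by grid search without listing them here:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
from tensorflow.keras.regularizers import l2

VOCAB_SIZE, EMBED_DIM, NUM_CLASSES = 20000, 128, 3  # assumed values

model = Sequential([
    Embedding(input_dim=VOCAB_SIZE, output_dim=EMBED_DIM),
    LSTM(64, dropout=0.2),
    Dense(64, activation="relu", kernel_regularizer=l2(1e-4)),
    Dense(32, activation="relu", kernel_regularizer=l2(1e-4)),
    Dense(NUM_CLASSES, activation="softmax"),  # offensive / abusive / benign
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Transfer step sketched in the paper: fit on the larger BIBREF1 data first,
# then continue training (fine-tune) on the smaller HEOT dataset.
```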
In this work, we use the datasets released by BIBREF1 and HEOT dataset provided by BIBREF0 .
Both the HEOT and BIBREF1 datasets contain tweets which are annotated in three categories: offensive, abusive and none (or benign)
|
Do they perform some annotation?
|
No.
|
null | false
| null |
Write a short story about a person who discovers a hidden room in their house. The story should include a plot twist and a clear resolution at the end.
|
Joelle bought a new house in her dream neighborhood nestled in the hills. The house was very old but beautiful, and the perfect setting for Joelle to live the quiet life she was wishing for. She had worked at the same busy job for 50 years and needed somewhere gentle for retirement. The day she moved in, she brought all her things from her apartment and her old cat named Trixi. When Trixi explored the house, she searched each room on the bottom floor but wouldn't walk up the stairs to the second floor. Joelle assumed it was due to her old age and didn't force Trixi up the stairs. Then one day, Joelle came home from the store and couldn't find Trixi anywhere; she was shocked. She ran around the bottom floor of the house and eventually went upstairs to her bedroom, defeated. She tripped over a loose plank in the floor and suddenly a trap door opened and she plummeted into darkness. Joelle was shocked, and when her eyes adjusted she realized the trap door led to a secret staircase in the house. The staircase led to an old wine cellar. That turned out to be the best day of Joelle's life, because she was able to auction off all the old wine bottles and artifacts and make millions of dollars.
|
|
null | false
| null |
Why is Kylie Jenner famous?
|
Mostly due to her older half-sister, Kim Kardashian. Kim released a sex tape with her ex-boyfriend in the early 2000s that went viral. At the same time, Kim and her family were filming the first season of their reality TV show - Keeping Up with the Kardashians. At the time, Kylie was in elementary school. Growing up in the spotlight for the remainder of her childhood, as well as launching a cosmetics line that became a billion-dollar business, made Kylie famous.
|
|
null | false
| 6
|
We explored improving the performance of neural abstractive summarizers when applied to the low-resource domain of student reflections using three approaches: domain transfer, data synthesis, and the combination of both. For domain transfer, a state-of-the-art abstractive summarization model was pretrained using out-of-domain data (CNN/DM), then tuned using in-domain data (student reflections). The process of tuning improved ROUGE scores on the student reflection data, and at the same time produced more readable summaries. To incorporate synthetic data, we proposed a new template-based synthesis model to synthesize new summaries. We showed that enriching the training data with this synthesized data can further increase the benefits of using domain transfer / tuning to increase ROUGE scores. We additionally showed that the proposed synthesis model outperformed a word-replacement synthesis baseline. Future plans include trying domain adaptation, enhancing the synthesising process by using other models, further exploring template-based methods, and extending the analysis of the synthesis model to cover other types of data like reviews and opinions.
Future plans include trying domain adaptation, enhancing the synthesising process by using other models, further exploring template-based methods, and extending the analysis of the synthesis model to cover other types of data like reviews and opinions.
|
What are the plans of the authors for the future?
|
Future plans include trying domain adaptation, enhancing the synthesising process by using other models, further exploring template-based methods, and extending the analysis of the synthesis model to cover other types of data like reviews and opinions.
|
null | false
| null |
What is the FA Cup?
|
The Football Association Challenge Cup, more commonly known as the FA Cup, is an annual knockout football competition in men's domestic English football. First played during the 1871–72 season, it is the oldest national football competition in the world. It is organised by and named after The Football Association (The FA). Since 2015, it has been known as The Emirates FA Cup after its headline sponsor. A concurrent Women's FA Cup has been held since 1970.
The competition is open to all eligible clubs down to Level 9 of the English football league system with Level 10 clubs acting as stand-ins in the event of non-entries from above. Included in the competition are 20 professional clubs in the Premier League (level 1), 72 professional clubs in the English Football League (levels 2 to 4), and all clubs in steps 1–5 of the National League System (levels 5 to 9) as well as a tiny number of step 6 clubs acting as stand-ins for non-entries above. A record 763 clubs competed in 2011–12. The tournament consists of 12 randomly drawn rounds followed by the semi-finals and the final. Entrants are not seeded, although a system of byes based on league level ensures higher ranked teams enter in later rounds – the minimum number of games needed to win, depending on which round a team enters the competition, ranges from six to fourteen.
The first six rounds are the Qualifying Competition, from which 32 teams progress to the first round of the Competition Proper, meeting the first of the 48 professional teams from Leagues One and Two. The last entrants are the Premier League and Championship clubs, into the draw for the Third Round Proper. In the modern era, only one non-League team has ever reached the quarter-finals, and teams below Level 2 have never reached the final. As a result, significant focus is given to the smaller teams who progress furthest, especially if they achieve an unlikely "giant-killing" victory.
Winners receive the FA Cup trophy, of which there have been two designs and five actual cups; the latest is a 2014 replica of the second design, introduced in 1911. Winners also qualify for the UEFA Europa League and a place in the upcoming FA Community Shield. Arsenal are the most successful club with fourteen titles and Arsène Wenger is the most successful manager in the history of the competition, having won seven finals as Arsenal's manager.
Liverpool are the current holders, having beaten Chelsea 6–5 on penalties following a 0–0 draw in the 2022 final.
|
|
null | false
| null |
Get Back is the first studio album by Australian rock and roll band The Basics. Released on 21 March 2003, the album consists of material composed by members Wally de Backer and Kris Schroeder. The album features De Backer, Schroeder, and original lead guitarist Michael Hubbard.
|
Given this paragraph about the rock band The Basics, what is notable about the Get Back album?
|
Get Back was their first studio album.
|
null | false
| 344
|
Neural machine translation (NMT) exploits neural networks to directly learn to transform sentences from a source language to a target language BIBREF0 , BIBREF1 . Universal multilingual NMT discovered that a neural translation system can be trained on datasets containing source and target sentences in multiple languages BIBREF2 , BIBREF3 . Successfully trained models using this approach can be used to translate arbitrarily between any languages included in the training data. In low-resource scenarios, multilingual NMT has proven to be an extremely useful regularization method since each language direction benefits from the information of the others BIBREF4 , BIBREF5 .
An important research focus of multilingual NMT is zero-shot translation (ZS), or translation between languages included in multilingual data for which no directly parallel training data exists. Application-wise, ZS offers a faster and more direct path between languages compared to pivot translation, which requires translation to one or many intermediate languages. This can result in large latency and error propagation, common issues in non-end-to-end pipelines. From a representation learning point of view, there is evidence of NMT's ability to capture language-independent features, which have proved useful for cross-lingual transfer learning BIBREF6 , BIBREF7 and provide motivation for ZS translation. However, it is still unclear if minimizing the difference in representations between languages is beneficial for zero-shot learning.
On the other hand, the current neural architecture and learning mechanisms of multilingual NMT is not geared towards having a common representation. Different languages are likely to convey the same semantic content with sentences of different lengths BIBREF8 , which makes the desiderata difficult to achieve. Moreover, the loss function of the neural translation model does not favour having sentences encoded in the same representation space regardless of the source language. As a result, if the network capacity is large enough, it may partition itself into different sub-spaces for different language pairs BIBREF9 .
Our work here focuses on the zero-shot translation aspect of universal multilingual NMT. First, we attempt to investigate the relationship of encoder representation and ZS performance. By modifying the Transformer architecture of BIBREF10 to afford a fixed-size representation for the encoder output, we found that we can significantly improve zero-shot performance at the cost of a lower performance on the supervised language pairs. To the best of our knowledge, this is the first empirical evidence showing that the multilingual model can capture both language-independent and language-dependent features, and that the former can be prioritized during training.
This observation leads us to the most important contribution in this work, which is to propose several techniques to learn a joint semantic space for different languages in multilingual models without any architectural modification. The key idea is to prefer a source language-independent representation in the decoder using an additional loss function. As a result, the NMT architecture remains untouched and the technique is scalable to the number of languages in the training data. The success of this method is shown by significant gains on zero-shot translation quality in the standard IWSLT 2017 multilingual benchmark BIBREF11 . Finally, we introduce a more challenging scenario that involves more than one bridge language between source and target languages. This challenging setup confirms the consistency of our zero-shot techniques while clarifying the disadvantages of pivot-based translation.
By modifying the Transformer architecture to afford a fixed-size representation for the encoder output, we found that we can significantly improve zero-shot performance at the cost of a lower performance on the supervised language pairs.
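One simple way to encourage a source-language-independent decoder input — shown only as an illustrative sketch, not the paper's exact auxiliary loss — is to penalize the distance between pooled encoder states of two parallel sentences:

```python
import torch
import torch.nn.functional as F

def mean_pool(states, mask):
    """Average encoder states over time, ignoring padded positions.
    states: (batch, time, dim); mask: (batch, time) with 1 for real tokens."""
    mask = mask.unsqueeze(-1).float()
    return (states * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)

def representation_similarity_loss(enc_a, mask_a, enc_b, mask_b):
    """Auxiliary loss that pulls the pooled representations of two parallel
    sentences (e.g., a source sentence and its pivot-language translation)
    toward each other; added to the usual translation cross-entropy."""
    return F.mse_loss(mean_pool(enc_a, mask_a), mean_pool(enc_b, mask_b))
```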
|
By modifying the transformer architecture to afford a fixed-size representation for the encoder output, the authors found that they can significantly improve zero-shot translation quality; what is the cost?
|
A lower performance on the supervised language pairs.
|
null | false
| 105
|
A natural way of testing a reader's comprehension of a text is to ask her a question the answer to which can be deduced from the text. Hence the task we are trying to solve consists of answering a cloze-style question, the answer to which depends on the understanding of a context document provided with the question. The model is also provided with a set of possible answers from which the correct one is to be selected. This can be formalized as follows:
The training data consist of tuples (q, d, A, a), where q is a question, d is a document that contains the answer to question q, A is a set of possible answers and a is the ground-truth answer. Both q and d are sequences of words from vocabulary W. We also assume that all possible answers are words from the vocabulary, that is A ⊆ W. In the CBT and CNN/Daily Mail datasets it is also true that the ground-truth answer a appears in the document. This is exploited by many machine learning models BIBREF2 , BIBREF4 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF10 , BIBREF11 , BIBREF12 , however some do not explicitly depend on this property BIBREF1 , BIBREF3 , BIBREF5 , BIBREF9
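Written out as a data structure (field names below are illustrative, not from the paper):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ClozeExample:
    question: List[str]    # q: cloze-style question containing a placeholder token
    document: List[str]    # d: context document that contains the answer to q
    candidates: List[str]  # A: candidate answer words, a subset of the vocabulary W
    answer: str            # a: ground-truth answer; in CBT/CNN-Daily Mail it occurs in d
```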
We also assume that all possible answers are words from the vocabulary, that is A ⊆ W.
|
What is their experimental assumption?
|
All possible answers are words from the vocabulary.
|
null | false
| null |
Identify the first run broadcast network (ABC, NBC, or CBS) of the following TV shows:
Castle, Chicago Fire, CSI, ER, Grey's Anatomy, Scandal, Seinfeld, Magnum PI
|
In the US, Castle, Grey's Anatomy, and Scandal were first broadcast on ABC
Chicago Fire, ER and Seinfeld were first broadcast on NBC. CSI and Magnum PI were first broadcast on CBS, though Magnum PI switched to NBC for the 5th season of the rebooted series.
|
|
null | false
| null |
Tell me a romantic fact about animals.
|
Several animals mate for life, such as beavers, gibbons, and bald eagles.
|
|
null | false
| null |
Who are the best starting MLB pitchers of all time?
|
There are many ways to argue who the best pitchers of all time are. Some of the criteria to consider are Cy Young Awards, ERA titles, strike out titles and records, post season success, and longevity. With these criteria heavily considered some of the best MLB pitchers of all time could be as follows.
5. Roger Clemens (The Rocket) pitched for 24 seasons and compiled accolades such as a record holding 7 Cy Young Awards, an MVP title, 5 strike out titles, 7 ERA titles, and won 2 World Series titles.
4. Pedro Martinez had the 4th highest win percentage in history, won the world series, won 3 Cy Young Awards, an MVP Award, and is one of only 4 pitchers to ever have more than 3,000 strike outs while walking fewer than 1,000.
3. Randy Johnson (The Big Unit) played for 22 MLB seasons, ranks 2nd all time for strikeouts, won 5 Cy Young Awards, and won The World Series.
2. Cy Young is one of the most legendary pitchers of all time and the annual award for the best pitcher in each MLB league is named after him. Some of his many accomplishments are that he is the all time leader for wins, innings pitched, and complete games.
1. Walter Johnson (The Big Train) was one of the "First Five" inaugurated into the MLB Hall of Fame. He is the all time leader in shutouts and was the all time leader for strikeouts for more than 55 years. He won The World Series and also led the league in strikeouts a record 12 times.
|
|
null | false
| 391
|
The recent progress in MRC is largely due to the introduction of large-scale datasets. CNN/Daily Mail BIBREF0 and SQuAD BIBREF1 are two popular and widely-used datasets. More recently, other datasets using different collection methodologies have been introduced, such as MS MARCO BIBREF11 , NewsQA BIBREF12 and RACE BIBREF7 . For example, MS MARCO collects data from search engine queries and user-clicked results, thus contains a broader topic coverage than Wikipedia and news articles in SQuAD and CNN/Daily Mail. Among the large number of MRC datasets, RACE focuses primarily on developing MRC models with near-human capability. Questions in RACE come from real English exams designed specifically to test human comprehension. This makes RACE an appealing testbed for DFN; we will further illustrate this in Section " RACE - The MRC Task" .
The word “fusion” for MRC was first used by FusionNet BIBREF4 to refer to the process of updating the representation of passage (or question) using information from the question (or passage) representation. A typical way of fusion is through attention: for example, BiDAF BIBREF2 uses a bi-directional attention, where the representation of passage (or question) vectors are re-weighted by their similarities to the question (or passage) vectors. We will use “fusion” and “attention” interchangeably throughout the paper.
In the attention process of state-of-the-art MRC models, a pre-defined attention strategy is often applied. BIBREF13 proposed a Bi-directional Multi-Perspective Matching (BiMPM) model, which uses attention with multiple perspectives characterized by different parameters. Although multi-perspective attention might be able to handle different types of questions, all perspectives are used for all the questions. DFN is inspired by BiMPM, but our dynamic attention process is more adaptive to variations of questions.
Another important component of MRC systems is the answer module, which performs reasoning to generate the final prediction. The reasoning methods in existing literature can be grouped into three categories: 1) single-step reasoning BIBREF14 , BIBREF15 , BIBREF2 , BIBREF16 ; 2) multi-step reasoning with a fixed number of steps BIBREF17 , BIBREF18 , BIBREF19 ; and 3) dynamic multi-step reasoning (ReasoNet BIBREF6 ). In particular, BIBREF19 proposed handling the variations in passages and questions using Maxout units and iterative reasoning. However, this model still applies static attention and reasoning (with fixed multiple steps), where the same attention strategy is applied to all questions. DFN can be seen as an extension of ReasoNet, in the sense that the dynamic strategy is applied not only in the reasoning process but also in the attention process.
The idea of dynamic attention has been applied to article recommendations BIBREF20 . For MRC, Andreas et al. (2016) proposed a dynamic decision process for reading comprehension task BIBREF8 . In their dynamic neural module networks, the MRC task is divided into several predefined steps (e.g., finding, lookup, relating), and a neural network is dynamically composed via RL based on parsing information. In DFN, we also incorporate dynamic decisions, but instead of using fixed steps, we apply dynamic decisions to various attention strategies and flexible reasoning steps.
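As a generic numpy sketch of attention-based fusion (re-weighting one side's vectors by their similarity to the other side), in the spirit of the bi-directional attention described above — not the DFN or BiDAF implementation:

```python
import numpy as np

def bidirectional_attention(P, Q):
    """Generic passage-question attention sketch (illustrative only).
    P: (n_p, d) passage vectors; Q: (n_q, d) question vectors."""
    S = P @ Q.T                                   # similarity matrix, (n_p, n_q)

    # Passage-to-question: each passage word attends over question words.
    a = np.exp(S - S.max(axis=1, keepdims=True))
    a = a / a.sum(axis=1, keepdims=True)
    q_aware_passage = a @ Q                       # (n_p, d)

    # Question-to-passage: each question word attends over passage words.
    b = np.exp(S - S.max(axis=0, keepdims=True))
    b = b / b.sum(axis=0, keepdims=True)
    p_aware_question = b.T @ P                    # (n_q, d)

    return q_aware_passage, p_aware_question
```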
Another important component of MRC systems is the answer module, which performs reasoning to generate the final prediction. The reasoning methods in existing literature can be grouped into three categories: 1) single-step reasoning (Chen et al., 2016, 2017; Seo et al., 2016; Wang et al., 2017a); 2) multi-step reasoning with a fixed number of steps (Dhingra et al., 2016; Sordoni et al., 2016; Xiong et al., 2016); and 3) dynamic multistep reasoning (ReasoNet (Shen et al., 2017)).
|
What categories are the reasoning methods of MRC systems in existing literature can be grouped into?
|
1) single-step reasoning, 2) multi-step reasoning with a fixed number of steps, and 3) dynamic multi-step reasoning.
|
null | false
| null |
Write a letter to my boss that convinces them that I deserve a raise.
|
Hey boss, it's me. We have worked hard over these past few periods, and while I look back fondly on our memories, I have an ask of you. You see, given how much I have sacrificed to deliver the results you have demanded over the past fiscal cycle, and that we surpassed our goals, I was thinking perhaps you could show your appreciation with a reasonable increase in salary?
Thank you for your consideration, and I look forward to surpassing our goals together in the future.
|
|
null | false
| null |
Name all of the hosts from all versions of "The Price is Right."
|
Bob Barker, Drew Carey, Tom Kennedy, Dennis James
|
|
1910.09399
| false
| null |
Deep learning has shed light on some of the most sophisticated advances in natural language representation, image synthesis BIBREF7, BIBREF8, BIBREF43, BIBREF35, and classification of generic data BIBREF44. However, the bulk of the latest breakthroughs in deep learning and computer vision were related to supervised learning BIBREF8. Even though natural language and image synthesis were part of several contributions on the supervised side of deep learning, unsupervised learning recently saw a tremendous rise in input from the research community, especially on two subproblems: text-based natural language and image synthesis BIBREF45, BIBREF14, BIBREF8, BIBREF46, BIBREF47. These subproblems are typically treated as focused research areas, and DC-GAN's contributions are mainly driven by them. In order to generate plausible images from natural language, DC-GAN's contributions revolve around developing a straightforward yet effective GAN architecture and training strategy that allows natural-text-to-image synthesis. These contributions are primarily tested on the Caltech-UCSD Birds and Oxford-102 Flowers datasets, where each image carries five text descriptions created by the research team when setting up the evaluation environment. The DC-GAN model is subsequently trained on several subcategories, which in this work represent the training and testing sub-datasets. The performance shown by these experiments displays a promising and effective way to generate images from textual natural language descriptions BIBREF8.
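As a rough illustration of what text-conditional generation involves, the toy PyTorch generator below compresses a text embedding, concatenates it with a noise vector, and decodes the result into an image tensor. The layer sizes and structure are invented for the example and do not reproduce the DC-GAN architecture.

```python
import torch
import torch.nn as nn

class TextConditionedGenerator(nn.Module):
    """Toy text-conditional generator: text embedding + noise -> image tensor."""
    def __init__(self, text_dim=1024, cond_dim=128, noise_dim=100):
        super().__init__()
        self.compress = nn.Linear(text_dim, cond_dim)      # compress text embedding
        self.decode = nn.Sequential(
            nn.Linear(cond_dim + noise_dim, 64 * 64 * 3),  # decode to 64x64 RGB
            nn.Tanh(),
        )

    def forward(self, text_emb, noise):
        cond = torch.relu(self.compress(text_emb))
        x = torch.cat([cond, noise], dim=1)
        return self.decode(x).view(-1, 3, 64, 64)

gen = TextConditionedGenerator()
fake = gen(torch.randn(4, 1024), torch.randn(4, 100))  # 4 fake 64x64 images
```

In a full GAN setup, a discriminator conditioned on the same text embedding would be trained adversarially against this generator.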
Even though natural language and image synthesis were part of several contributions on the supervised side of deep learning, unsupervised learning recently saw a tremendous rise in input from the research community, especially on two subproblems: text-based natural language and image synthesis BIBREF45, BIBREF14, BIBREF8, BIBREF46, BIBREF47.
|
Is text-to-image synthesis trained in a supervised or unsupervised manner?
|
The answers are shown as follows:
* Even though natural language and image synthesis were part of several contributions on the supervised side of deep learning, unsupervised learning recently saw a tremendous rise in input from the research community, especially on two subproblems: text-based natural language and image synthesis
|
null | false
| null |
The concept of the electric field was introduced by Michael Faraday. An electric field is created by a charged body in the space that surrounds it, and results in a force exerted on any other charges placed within the field. The electric field acts between two charges in a similar manner to the way that the gravitational field acts between two masses, and like it, extends towards infinity and shows an inverse square relationship with distance. However, there is an important difference. Gravity always acts in attraction, drawing two masses together, while the electric field can result in either attraction or repulsion. Since large bodies such as planets generally carry no net charge, the electric field at a distance is usually zero. Thus gravity is the dominant force at distance in the universe, despite being much weaker.
An electric field generally varies in space and its strength at any one point is defined as the force (per unit charge) that would be felt by a stationary, negligible charge if placed at that point. The conceptual charge, termed a 'test charge', must be vanishingly small to prevent its own electric field disturbing the main field and must also be stationary to prevent the effect of magnetic fields. As the electric field is defined in terms of force, and force is a vector, having both magnitude and direction, so it follows that an electric field is a vector field.
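The definition above can be written compactly; the following is standard electrostatics notation added for illustration rather than quoted from the passage.

```latex
% Field strength as force per unit test charge, in the limit of a
% vanishingly small stationary charge:
\[ \vec{E} = \lim_{q \to 0} \frac{\vec{F}}{q} \]
% Field of a point charge Q at distance r, showing the inverse-square
% dependence mentioned in the passage:
\[ \vec{E}(r) = \frac{1}{4\pi\varepsilon_0}\,\frac{Q}{r^{2}}\,\hat{r} \]
```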
The study of electric fields created by stationary charges is called electrostatics. The field may be visualised by a set of imaginary lines whose direction at any point is the same as that of the field. This concept was introduced by Faraday, whose term 'lines of force' still sometimes sees use. The field lines are the paths that a point positive charge would seek to make as it was forced to move within the field; they are however an imaginary concept with no physical existence, and the field permeates all the intervening space between the lines. Field lines emanating from stationary charges have several key properties: first, that they originate at positive charges and terminate at negative charges; second, that they must enter any good conductor at right angles, and third, that they may never cross nor close in on themselves.
|
Based on the following passage what is the more dominant force in the universe?
|
Gravity is the dominant force at distance in the universe, despite being much weaker.
|
null | false
| null |
What is the supreme law of the land?
|
the Constitution
|
|
null | false
| 21
|
Text-based games -- in which an agent interacts with the world through textual natural language -- present us with the problem of combinatorially-sized action-spaces. Most current reinforcement learning algorithms are not capable of effectively handling such a large number of possible actions per turn. Poor sample efficiency, consequently, results in agents that are unable to pass bottleneck states, where they are unable to proceed because they do not see the right action sequence to pass the bottleneck enough times to be sufficiently reinforced. Building on prior work using knowledge graphs in reinforcement learning, we introduce two new game state exploration strategies. We compare our exploration strategies against strong baselines on the classic text-adventure game, Zork1, where prior agents have been unable to get past a bottleneck where the agent is eaten by a Grue.
We compare our exploration strategies against strong baselines on the classic text-adventure game, Zork1, where prior agents have been unable to get past a bottleneck where the agent is eaten by a Grue.
|
Which game has been involved in this study?
|
Zork1.
|
null | false
| null |
Summarize the opening of the Gettysburg Address in simple modern English.
"Four score and seven years ago our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal.
Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure. We are met on a great battle-field of that war. We have come to dedicate a portion of that field, as a final resting place for those who here gave their lives that that nation might live. It is altogether fitting and proper that we should do this."
|
87 years ago, we created this country so that people could be equal. We will see if a country like this can survive civil war. It's right that we remember people who died fighting for this country and are buried here.
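For reference, the arithmetic behind the modern rendering (a 'score' is twenty years) is simply:

```latex
\[ 4 \times 20 + 7 = 87 \]
```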
|
|
2004.04315
| false
| null |
We collected COVID-19 related Arabic tweets from January 1, 2020 until April 15, 2020, using the Twitter streaming API and the Tweepy Python library. We have collected more than 3,934,610 tweets so far. In our dataset, we store the full tweet object including the id of the tweet, username, hashtags, and geolocation of the tweet. We created a list of the most common Arabic keywords associated with COVID-19. Using Twitter’s streaming API, we searched for any tweet containing the keyword(s) in the text of the tweet. Table TABREF1 shows the list of keywords used along with the starting date of tracking each keyword. Furthermore, Table TABREF2 shows the list of hashtags we have been tracking along with the number of tweets collected from each hashtag. Indeed, some tweets were irrelevant, and we kept only those that were relevant to the pandemic.
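A minimal sketch of this kind of keyword-based collection with the (pre-4.0) Tweepy streaming interface is shown below. The credentials, output file, and keyword list are placeholders, and this is not the authors' actual collection code.

```python
import json
import tweepy

class KeywordListener(tweepy.StreamListener):
    """Append the full JSON object of every matching tweet to a file."""
    def __init__(self, out_path):
        super().__init__()
        self.out = open(out_path, "a", encoding="utf-8")

    def on_status(self, status):
        # Store the full tweet object (id, user, hashtags, geolocation, ...).
        self.out.write(json.dumps(status._json, ensure_ascii=False) + "\n")

    def on_error(self, status_code):
        # Returning False on rate-limit errors (420) stops the stream.
        return status_code != 420

auth = tweepy.OAuthHandler("API_KEY", "API_SECRET")  # placeholder credentials
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
stream = tweepy.Stream(auth=auth, listener=KeywordListener("covid_tweets.jsonl"))
stream.filter(track=["كورونا", "COVID-19"])  # example keywords only
```

Relevance filtering (discarding off-topic tweets) would then be applied to the stored file as a separate post-processing step.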
We have collected more than 3,934,610 tweets so far.
|
How big is the dataset?
|
The answers are shown as follows:
* more than 3,934,610 tweets
|
null | false
| null |
Who are the greatest players that have played for Manchester United?
|
- Sir Bobby Charlton
- Eric Cantona
- Paul Scholes
- Nemanja Vidic
- David Beckham
- Cristiano Ronaldo
- Ruud van Nistelrooy
- Wayne Rooney
- Gary Neville
- Denis Law
- George Best
- Peter Schmeichel
|
|
null | false
| 273
|
Nowadays, a large amount of information is produced and shared in unstructured form, mostly unstructured text BIBREF0, BIBREF1. This information can be exploited in decision-making processes but, to be useful, it should be transformed and presented in ways that make its intrinsic knowledge more readily intelligible. For that, we need efficient methods and tools that quickly extract useful information from unstructured text collections. Such demand can be observed, for instance, in Biology, where researchers, in order to keep abreast of all developments, need to analyse new biomedical literature on a daily basis BIBREF2. Another application is in fraud and corruption studies, where the network information (the set of actors and their relationships) is implicitly stored in unstructured natural-language documents BIBREF3. Hence, text mining and information extraction are required to pre-process the texts in order to extract the entities and the relations between them.
Information extraction is a challenging task, mainly due to the ambiguity of natural language. Moreover, most tools need to be adapted to different human languages and to different domains BIBREF4. In fact, the language of the processed texts is still the decisive factor when choosing among existing information extraction technologies. This is also true for the task of entity extraction (Named Entity Recognition - NER).
For several reasons, text mining tools are typically first developed for English and only afterwards extended to other languages. Thus, there are still relatively few text mining tools for Portuguese and even fewer that are freely accessible. In particular, for the named entity recognition task in Portuguese texts, we find three extractors available: Alchemy, Zemanta and Rembrandt BIBREF5. We also find some studies where the measures (INLINEFORM0, INLINEFORM1 and INLINEFORM2) for those extractors are computed and compared BIBREF6, but their comparative effectiveness remains domain- and purpose-dependent.
In this work, we present PAMPO (PAttern Matching and POs tagging based algorithm for NER), a new method to automatically extract named entities from unstructured texts, applicable to the Portuguese language but potentially adaptable to other languages as well. The method relies on flexible pattern matching, part-of-speech tagging and lexical-based rules. All steps are implemented using free software and taking advantage of various existing packages.
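To give a flavour of the pattern-matching and lexical-rule stages, the snippet below is an invented toy example, not PAMPO itself; the real system also relies on part-of-speech tags and richer rule sets.

```python
import re

# Candidate entities: maximal sequences of capitalized words, optionally joined
# by common Portuguese connectors ("de", "da", "do", "dos", "das", "e").
CONNECTORS = r"(?:de|da|do|dos|das|e)"
PATTERN = re.compile(
    r"\b[A-ZÁÉÍÓÚÂÊÔÃÕÇ][\wáéíóúâêôãõç]+"
    rf"(?:\s+(?:{CONNECTORS}\s+)?[A-ZÁÉÍÓÚÂÊÔÃÕÇ][\wáéíóúâêôãõç]+)*"
)
STOP_WORDS = {"Os", "As", "Um", "Uma"}  # lexical rule: drop bare articles

def candidate_entities(text):
    for match in PATTERN.finditer(text):
        candidate = match.group(0)
        if candidate not in STOP_WORDS:
            yield candidate

print(list(candidate_entities("Maria de Lurdes viajou de Lisboa para o Porto.")))
# ['Maria de Lurdes', 'Lisboa', 'Porto']
```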
The process has been developed using as case-study a specific book written in Portuguese, but it has since been used in other applications and successfully tested in different text collections. In this paper, we describe the evaluation procedures on independent textual collections, and produce a comparative study of PAMPO with other existing tools for NER.
The method relies on flexible pattern matching, part-of-speech tagging and lexical-based rules. All steps are implemented using free software and taking advantage of various existing packages.
|
What rules does the method rely on?
|
Flexible pattern matching, part-of-speech tagging and lexical-based rules.
|
null | false
| null |
In the series A Song of Ice and Fire, who is the founder of House Justman?
|
Benedict I Justman, known also as Benedict Rivers, Benedict the Bold, and Benedict the Just
|
|
null | false
| null |
Running a business is tough and there will always be problems in the market. Make the case to a skeptical colleague as to why private sector managers should devote attention and energy to being a steward of the institutions of capitalism. Please include citations for Jonathan Haidt.
|
To steward means to take care of and to guide, and it implies a sense of responsibility. I agree with the notion that, in addition to stories about capitalism prevailing as either an extractor or a savior, a third story can be written of “prosperity with problems,” in which participants are grateful for accomplishments yet pursue improvements (Haidt, J.). Who, then, is responsible for writing this story? I believe the answer is business leaders themselves -- or else they will be beholden to government regulation to steward the institution in a direction that benefits not just owners and shareholders, but all stakeholders, including employees and the environment.
But why should business leaders steward the market? Private managers hold a moral obligation to those relying on capitalism for work and health insurance to create sustainable jobs. As individuals count on business leaders to respond to our world’s most prevalent issues, business leaders also hold a social obligation to do so with care. This also assigns the appropriate accountability to whoever is responsible for perpetuating the system: if business leaders want to reap the rewards of profit, they must assume responsibility for the very conditions that benefit them.
As alluded to above, private interests sometimes conflict horribly with what society and individuals need, so government regulators must be stewards alongside business leaders. Additionally, business leaders can take a step away from their day-to-day work and learn from those who have dedicated their lives to research.
|
|
null | false
| null |
How do I make great espresso?
|
Making great espresso requires a few elements. Firstly, you need to start with freshly roasted, high quality coffee that's ground and brewed immediately thereafter. You need a burr grinder that's able to consistently grind coffee finely, with minimal heat and retention. Finally, you'll need to brew your coffee in an espresso machine, ideally one that has good temperature stability, uniform dispersion of water and the ability to extract at 9 bar of pressure.
|
|
1803.02839
| false
| null |
We trained word embeddings and a uni-directional GRU connected to a dense layer end-to-end for text classification on a set of scraped tweets using cross-entropy as the loss function. End-to-end training was selected to impose as few heuristic constraints on the system as possible. Each tweet was tokenized using NLTK TweetTokenizer and classified as one of 10 potential accounts from which it may have originated. The accounts were chosen based on the distinct topics each is known to typically tweet about. Tokens that occurred fewer than 5 times were disregarded in the model. The model was trained on 22106 tweets over 10 epochs, while 5526 were reserved for validation and testing sets (2763 each). The network demonstrated an insensitivity to the initialization of the hidden state, so, for algebraic considerations, INLINEFORM0 was chosen for hidden dimension of INLINEFORM1 . A graph of the network is shown in Fig.( FIGREF13 ).
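A compact PyTorch sketch of this architecture is given below; the vocabulary size, embedding dimension, and hidden dimension are illustrative guesses rather than the values used in the paper.

```python
import torch
import torch.nn as nn

class TweetClassifier(nn.Module):
    """Embedding -> uni-directional GRU -> dense layer, trained end-to-end."""
    def __init__(self, vocab_size=10000, emb_dim=100, hidden_dim=64, n_classes=10):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_classes)

    def forward(self, token_ids):                  # (batch, seq_len) token ids
        _, h = self.gru(self.emb(token_ids))       # h: (1, batch, hidden_dim)
        return self.out(h.squeeze(0))              # (batch, n_classes) logits

model = TweetClassifier()
loss_fn = nn.CrossEntropyLoss()                    # cross-entropy, as in the paper
logits = model(torch.randint(0, 10000, (8, 30)))   # fake batch of 8 tokenized tweets
loss = loss_fn(logits, torch.randint(0, 10, (8,))) # 10 candidate source accounts
loss.backward()
```

Tokenization with NLTK's TweetTokenizer and the vocabulary cutoff (tokens seen fewer than 5 times are dropped) would happen before ids are fed to the model.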
Each tweet was tokenized using NLTK TweetTokenizer and classified as one of 10 potential accounts from which it may have originated. The accounts were chosen based on the distinct topics each is known to typically tweet about.
|
What text classification task is considered?
|
To classify a text as belonging to one of the ten possible classes.
|
null | false
| null |
Write a recipe for making garlic mashed potatoes which goes through step by step.
|
1) Wash and peel the potatoes
2) Chop the potatoes into inch size pieces
3) Fill a pot with enough water to cover the potatoes
4) Bring the water to boil
5) Boil until potatoes are soft and fall apart when pressed with a fork
6) Drain potatoes
7) Add cream, butter and crushed garlic to the potatoes
8) Mash the potatoes until creamy, adding cream to reach the desired texture
|
|
null | false
| null |
What factors are identified as contributing to growth and development in early childhood?
|
Many factors are identified as contributing to growth and development in early childhood, such as nutrition, parents’ behaviors, parenting, social and cultural practices, and environment. Nutrition is important for young children because a child's body needs the right amount of nutrients to build and repair itself. A balanced diet is rich in vitamins, minerals, protein, carbohydrates, and fats to support young children's brain and body development. Child abuse and neglect have long-term consequences. They can negatively affect a child's brain and cause psychological problems. Exposure to violence can harm a child's emotional, psychological, and even physical development. As a child care provider, I want to share this knowledge with parents through different means such as newsletters, posters, and flyers, and by sharing with them some of the recognized online resources.
|
|
null | false
| null |
1981 Iraqi embassy bombing in Beirut
Iraqi embassy bombing in Beirut
Part of Lebanese Civil War and spillover of the Iran–Iraq War
Location Beirut, Lebanon
Coordinates 33°52′31″N 35°29′03″E
Date December 15, 1981
Attack type Suicide car bomb
Deaths 61
Injured 100+
Perpetrator Al-Dawa
On December 15, 1981, the Iraqi Shi'a Islamist group al-Dawa carried out a suicide car bombing targeting the Iraqi embassy in Beirut, Lebanon. The explosion leveled the embassy and killed 61 people, including Iraq's ambassador to Lebanon, and injured at least 100 others.
The attack is considered by some to be the second modern suicide bombing, preceded by the 1927 Bath School bombings, and it presaged the 1983 United States embassy bombing and the 1983 Beirut barracks bombings.
Background
Emboldened by the success of the 1979 Iranian Revolution, the anti-Baathist Shi'a Islamist group al-Dawa, with financial and military assistance from the Islamic Republic of Iran, began to employ violence in its struggle against the Iraqi government. In 1979 and 1980, al-Dawa assassinated a number of "senior but low-profile" Baathist officials in Iraq. In response to a failed assassination attempt on Iraqi Deputy Prime Minister Tariq Aziz in April 1980 by al-Dawa, the Iraqi government launched a severe crackdown on the group, which included the execution of al-Dawa spiritual leader Mohammad Baqir al-Sadr.
The remaining al-Dawa leadership fled to Iran and the group became an "effective proxy" for the Iranian government against Iraq during the Iran–Iraq War, which broke out in September 1980.
Bombing
On December 15, 1981, a car filled with approximately 100 kilograms of explosives was driven into Iraq's embassy building in Beirut by a suicide bomber. The ensuing explosion devastated the embassy, killed 61 people, including the Iraq ambassador, Abdul Razzak Lafta, and injured more than 100 others. Balqis al-Rawi, the Iraqi wife of Syrian poet Nizar Qabbani, who worked for the embassy's cultural section, was also killed in the attack.
Impact
The attack, which presaged the 1983 United States embassy bombing and the 1983 Beirut barracks bombing, is considered by some to be the first modern suicide bombing, though others nominate earlier attacks.
|
How was the Iraqi embassy in Beirut destroyed?
|
On December 15, 1981, a car filled with approximately 100 kilograms of explosives was driven into Iraq's embassy building in Beirut by a suicide bomber. The ensuing explosion devastated the embassy, killed 61 people, including the Iraq ambassador, Abdul Razzak Lafta, and injured more than 100 others.
|
null | false
| null |
The XYZ Affair was a political and diplomatic episode in 1797 and 1798, early in the presidency of John Adams, involving a confrontation between the United States and Republican France that led to the Quasi-War. The name derives from the substitution of the letters X, Y, and Z for the names of French diplomats Jean-Conrad Hottinguer (X), Pierre Bellamy (Y), and Lucien Hauteval (Z) in documents released by the Adams administration.
|
What was the "X.Y.Z. fever"?
|
The "X.Y.Z. fevor" is a reference to a diplomatic incident that occurred during the administration of the second President of the United Status, John Adams. Adams sent a diplomatic delegation to France to negotiate a resolution of issues that were threatening to lead to war. The French foreign minister, Charles-Maurice de Talleyrand-Périgord (Talleyrand) demanded a bribe and a loan to begin the negotiations. President Adams refused Tallyrand's request and reported the solicitation to Congress. The Senate printed President Adam's report which referred to the three French diplomats involved, Jean-Conrad Hottinguer, Pierre Bellamy, and Lucien Hauteval as "X,Y, and Z" respectively.
The populace supported Adams decision to refuse Tallyrand's bribe request. Vice-President Thomas Jefferson referred to the outpouring of support for Adam's political party that followed as "X.Y.Z. fevor".
|
null | false
| null |
Who is Daniel Ricciardo?
|
Daniel Joseph Ricciardo is an Italian-Australian racing driver who last raced in Formula One for McLaren, under the Australian flag. He is currently the third driver for the Red Bull F1 team. He is considered to be one of the best overtakers in F1 history.
|
|
null | false
| null |
Biryani is a mixed rice dish originating among the Muslims of the Indian subcontinent. It is made with Indian spices, rice, and usually some type of meat (chicken, beef, goat, lamb, prawn, and fish), or in some cases without any meat, and sometimes, in addition, eggs and potatoes.
Biryani is one of the most popular dishes in South Asia, as well as among the diaspora from the region. Similar dishes are also prepared in other parts of the world such as in Iraq, Myanmar, Thailand, and Malaysia. Biryani is the single most-ordered dish on Indian online food ordering and delivery services and has been labeled as the most popular dish overall in India.
|
What is Biryani?
|
The term biryani comes from the Farsi phrase birinj biriyan, “fried rice.” Rice is fried separately until about half-cooked, usually in oil or ghee, and then placed in a pot along with marinated meat or vegetables and spices. Biryani was first recorded in India in the Deccan region of the south, suggesting that it was brought there by merchants via a sea route. Biryani has since spread to all parts of the Indian subcontinent, with numerous local and regional variations. Varieties of biryani are also found in Sri Lanka, Malaysia, Thailand, and Oman.
|