| paper_id (string, length 10, nullable) | yes_no (bool, 2 classes) | paper_index (int64, 0–519, nullable) | evidence (string, length 0–37.7k) | question (string, length 4–11.7k) | answer (string, length 1–26k) |
|---|---|---|---|---|---|
null | false
| null |
The Fannie Patton was a stern-wheel driven steamboat that operated on the Willamette River, in Oregon, starting in August 1865. This steamer operated from 1865 to 1880 for various owners, and was considered a profitable vessel. The steamer was named for Frances Mary "Fannie" Cooke (1837–1886), the daughter of businessman Edwin N. Cooke, one of the principals of the People's Transportation Company.
|
What was the Fannie Patton steamboat?
|
The Fannie Patton was a steamboat that began operating in 1865 on the Willamette River in Oregon. It was named after Frances Mary "Fannie" Cooke, the daughter of businessman Edwin N. Cooke.
|
null | false
| 34
|
Since we focus on single-relation question answering in this work, our model decodes every question into a KB query that consists of exactly two elements: the topic entity and the predicate. More formally, our model is a function $f(q, \lbrace e\rbrace , \lbrace p\rbrace )$ that takes as input a question $q$ , a set of candidate entities $\lbrace e\rbrace =\lbrace e_1, ...,e_n\rbrace $ , a set of candidate predicates $\lbrace p\rbrace =\lbrace p_1,..., p_m\rbrace $ , and produces a likelihood score $p(e_i, p_j|q)$ of generating entity $e_i$ and predicate $p_j$ given question $q$ for all $i\in \lbrace 1,...,n\rbrace , j\in \lbrace 1,...,m\rbrace $ .
As illustrated in Figure 1, our model consists of three components:
The details of each component are described in the following sections.
We use back-propagation to learn all of the weights in our model.
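The pair-scoring interface described above can be sketched as follows. This is a hypothetical stand-in, not the paper's neural model: it scores each candidate (entity, predicate) pair by crude word overlap with the question and normalizes the scores with a softmax; the example question and candidates are invented for illustration.

```python
import math

def overlap(question, name):
    """Crude relevance: fraction of the candidate's words found in the question."""
    q_words = set(question.lower().split())
    n_words = set(name.lower().split())
    return len(q_words & n_words) / len(n_words)

def score_pairs(question, entities, predicates):
    """Return p(e_i, p_j | q) for every candidate pair via a softmax over raw scores."""
    raw = {(e, p): overlap(question, e) + overlap(question, p)
           for e in entities for p in predicates}
    z = sum(math.exp(v) for v in raw.values())
    return {pair: math.exp(v) / z for pair, v in raw.items()}

probs = score_pairs("where was Barack Obama born",
                    entities=["Barack Obama", "Michelle Obama"],
                    predicates=["born in", "spouse of"])
best = max(probs, key=probs.get)  # highest-scoring (entity, predicate) pair
```

In the paper's actual model, the raw scores would come from learned neural components trained with back-propagation; only the softmax normalization over all candidate pairs carries over from this sketch.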
|
How to learn the weights in the model?
|
They use back-propagation to learn all of the weights in their model.
|
null | false
| null |
What were Atari gaming consoles?
|
Gaming consoles from Atari:
- Atari 2600
- Atari 5200
- Atari 7800
- Atari Jaguar
- Atari Jaguar CD
- Atari Lynx
- Atari ST
- Atari XEGS
|
|
null | false
| null |
The reef manta ray (Mobula alfredi) is a species of ray in the family Mobulidae, one of the largest rays in the world. Among generally recognized species, it is the second-largest species of ray, only surpassed by the giant oceanic manta ray.
The species was described in 1868 by Gerard Krefft, the director of the Australian Museum. He named it M. alfredi in honor of Alfred, Duke of Edinburgh, the first member of the British royal family to visit Australia. It was originally described as part of the genus Manta but in 2017 was changed, along with the rest of the mantas, to be included as part of the genus Mobula.
Reef manta rays are typically 3 to 3.5 m (9.8 to 11.5 ft) in disc width, with a maximum size of about 5.5 m (18 ft). For a long time included in M. birostris, the status of the reef manta ray as a separate species was only confirmed in 2009. The reef manta ray is found widely in the tropical and subtropical Indo-Pacific, but with a few records from the tropical East Atlantic and none from the West Atlantic or East Pacific. Compared to the giant oceanic manta ray, the reef manta ray tends to be found in shallower, more coastal habitats, but local migrations are sometimes reported. Mobula birostris is similar in appearance to Mobula alfredi and the two species may be confused as their distribution overlaps. However, there are distinguishing features.
|
How big are reef manta rays?
|
Reef manta rays are typically 3 to 3.5 m (9.8 to 11.5 ft) in disc width, with a maximum size of about 5.5 m (18 ft).
|
null | false
| null |
The Luftwaffe was the aerial-warfare branch of the German Wehrmacht before and during World War II. Germany's military air arms during World War I, the Luftstreitkräfte of the Imperial Army and the Marine-Fliegerabteilung of the Imperial Navy, had been disbanded in May 1920 in accordance with the terms of the 1919 Treaty of Versailles which banned Germany from having any air force.
During the interwar period, German pilots were trained secretly in violation of the treaty at Lipetsk Air Base in the Soviet Union. With the rise of the Nazi Party and the repudiation of the Versailles Treaty, the Luftwaffe's existence was publicly acknowledged on 26 February 1935, just over two weeks before open defiance of the Versailles Treaty through German rearmament and conscription would be announced on 16 March. The Condor Legion, a Luftwaffe detachment sent to aid Nationalist forces in the Spanish Civil War, provided the force with a valuable testing ground for new tactics and aircraft. Partially as a result of this combat experience, the Luftwaffe had become one of the most sophisticated, technologically advanced, and battle-experienced air forces in the world when World War II broke out in September 1939. By the summer of 1939, the Luftwaffe had twenty-eight Geschwader (wings). The Luftwaffe also operated a paratrooper force known as the Fallschirmjäger.
The Luftwaffe proved instrumental in the German victories across Poland and Western Europe in 1939 and 1940. During the Battle of Britain, however, despite inflicting severe damage to the RAF's infrastructure and, during the subsequent Blitz, devastating many British cities, the German air force failed to batter the beleaguered British into submission. From 1942, Allied bombing campaigns gradually destroyed the Luftwaffe's fighter arm. From late 1942, the Luftwaffe used its surplus ground support and other personnel to raise Luftwaffe Field Divisions. In addition to its service in the West, the Luftwaffe operated over the Soviet Union, North Africa and Southern Europe. Despite its belated use of advanced turbojet and rocket-propelled aircraft for the destruction of Allied bombers, the Luftwaffe was overwhelmed by the Allies' superior numbers and improved tactics, and a lack of trained pilots and aviation fuel. In January 1945, during the closing stages of the Battle of the Bulge, the Luftwaffe made a last-ditch effort to win air superiority, and met with failure. With rapidly dwindling supplies of petroleum, oil, and lubricants after this campaign, and as part of the entire combined Wehrmacht military forces as a whole, the Luftwaffe ceased to be an effective fighting force.
After the defeat of Nazi Germany, the Luftwaffe was disbanded in 1946. During World War II, German pilots claimed roughly 70,000 aerial victories, while over 75,000 Luftwaffe aircraft were destroyed or significantly damaged. Of these, nearly 40,000 were lost entirely. The Luftwaffe had only two commanders-in-chief throughout its history: Hermann Göring and later Generalfeldmarschall Robert Ritter von Greim for the last two weeks of the war.
|
Given the below reference text about the Luftwaffe, tell me how many years this military organisation was officially active in Germany.
|
The Luftwaffe was officially active for about 11 years, from 1935 to 1946.
|
1603.07044
| false
| null |
Automation of cQA forums can be divided into three tasks: question-comment relevance (Task A), question-question relevance (Task B), and question-external comment relevance (Task C). One might think that classic retrieval models like language models for information retrieval BIBREF0 could solve these tasks. However, a big challenge for cQA tasks is that users are used to expressing similar meanings with different words, which creates gaps when matching questions based on common words. Other challenges include informal usage of language, highly diverse content of comments, and variation in the length of both questions and comments.
In our cQA tasks, the pair of objects are (question, question) or (question, comment), and the relationship is relevant/irrelevant. The left side of Figure 1 shows one intuitive way to predict relationships using RNNs. Parallel LSTMs encode two objects independently, and then concatenate their outputs as an input to a feed-forward neural network (FNN) with a softmax output layer for classification.
For task C, in addition to an original question (oriQ) and an external comment (relC), the question which relC commented on is also given (relQ). To incorporate this extra information, we consider a multitask learning framework which jointly learns to predict the relationships of the three pairs (oriQ/relQ, oriQ/relC, relQ/relC).
Automation of cQA forums can be divided into three tasks: question-comment relevance (Task A), question-question relevance (Task B), and question-external comment relevance (Task C).
In our cQA tasks, the pair of objects are (question, question) or (question, comment), and the relationship is relevant/irrelevant.
For task C, in addition to an original question (oriQ) and an external comment (relC), the question which relC commented on is also given (relQ). To incorporate this extra information, we consider a multitask learning framework which jointly learns to predict the relationships of the three pairs (oriQ/relQ, oriQ/relC, relQ/relC).
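The multitask setup for Task C can be sketched as follows, with toy stand-ins for the learned components: a bag-of-words set in place of the shared LSTM encoder, and thresholded Jaccard similarity in place of the feed-forward classifier. The example texts are invented.

```python
def encode(text):
    # Toy stand-in for the shared LSTM encoder: a bag-of-words set.
    return set(text.lower().split())

def relevance(a, b, threshold=0.2):
    # Toy stand-in for the FNN + softmax classifier: thresholded Jaccard similarity.
    sim = len(a & b) / len(a | b)
    return sim >= threshold, sim

def multitask_predict(oriQ, relQ, relC):
    """Jointly score the three pairs used in the multitask framework for Task C."""
    reps = {name: encode(t) for name, t in
            [("oriQ", oriQ), ("relQ", relQ), ("relC", relC)]}
    return {
        "oriQ/relQ": relevance(reps["oriQ"], reps["relQ"]),
        "oriQ/relC": relevance(reps["oriQ"], reps["relC"]),
        "relQ/relC": relevance(reps["relQ"], reps["relC"]),
    }

out = multitask_predict("how do I renew my visa in Qatar",
                        "what is the process to renew a visa in Qatar",
                        "you need to visit the immigration office to renew it")
```

The point of the sketch is the joint interface: a single shared encoder feeds all three pairwise predictions, which is what lets the auxiliary pairs (oriQ/relQ and relQ/relC) supply extra training signal for the main oriQ/relC task.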
|
What supplemental tasks are used for multitask learning?
|
Multitask learning is used for the task of predicting relevance of a comment on a different question to a given question, where the supplemental tasks are predicting relevance between the questions, and between the comment and the corresponding question
|
null | false
| null |
Why do many of the flags of African countries have similar colors?
|
Ethiopia was the only sub-Saharan African country not to be colonized. Since the 19th century its flag has the colors green, yellow, and red. When Ghana became independent, it was inspired by Ethiopia’s resistance to colonization, and took those colors for its own flag. Subsequent African countries used the same colors, as a sign of unity with the other African countries.
|
|
1810.00663
| false
| null |
While the dataset was collected with simulated environments, no structure was imposed on the navigation instructions while crowd-sourcing data. Thus, many instructions in our dataset are ambiguous. Moreover, the order of the behaviors in the instructions is not always the same. For instance, a person said “turn right and advance” to describe part of a route, while another person said “go straight after turning right” in a similar situation. The high variability present in the natural language descriptions of our dataset makes the problem of decoding instructions into behaviors not trivial. See Appendix A of the supplementary material for additional details on our data collection effort.
While the dataset was collected with simulated environments, no structure was imposed on the navigation instructions while crowd-sourcing data. Thus, many instructions in our dataset are ambiguous. Moreover, the order of the behaviors in the instructions is not always the same. For instance, a person said “turn right and advance” to describe part of a route, while another person said “go straight after turning right” in a similar situation. The high variability present in the natural language descriptions of our dataset makes the problem of decoding instructions into behaviors not trivial. See Appendix A of the supplementary material for additional details on our data collection effort.
|
What language is the experiment done in?
|
English.
|
null | false
| null |
Is Pluto a planet ?
|
Pluto's status is the subject of extensive debate among planetary scientists. Upon its discovery, Pluto was initially classified as the ninth planet of the Solar system. However in 2006, the International Astronomical Union (IAU) issued a definition for planets in the Solar system that resulted in the reclassification of Pluto as a "dwarf planet".
According to this definition, a planet is any celestial body that :
- orbits the Sun,
- has enough mass to assume a hydrostatic equilibrium (nearly round) shape,
- cleared away any other similarly-sized objects near its orbit around the Sun.
The last criterion has not been met by Pluto.
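The three IAU criteria amount to a simple decision rule, sketched here (the "small Solar System body" fallback is the third category defined by the 2006 IAU resolution):

```python
def classify_body(orbits_sun, hydrostatic_equilibrium, cleared_orbit):
    """Apply the 2006 IAU criteria: all three -> planet;
    first two only -> dwarf planet; otherwise -> small Solar System body."""
    if orbits_sun and hydrostatic_equilibrium and cleared_orbit:
        return "planet"
    if orbits_sun and hydrostatic_equilibrium:
        return "dwarf planet"
    return "small Solar System body"

# Pluto orbits the Sun and is round, but has not cleared its orbital neighbourhood.
pluto = classify_body(orbits_sun=True, hydrostatic_equilibrium=True,
                      cleared_orbit=False)  # -> "dwarf planet"
```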
|
|
null | false
| null |
What is the difference between baking soda and baking powder?
|
Baking soda is the common name for a chemical compound called sodium bicarbonate. It usually comes in the form of a white fine powder. It's a versatile product that can be used for baking and cleaning. On the other hand, baking powder is made of baking soda plus cream of tartar and cornstarch. It's typically used for baking and helps your goodies come out light and airy.
|
|
null | false
| null |
The Hatfield–McCoy feud, also described by journalists as the Hatfield–McCoy conflict, involved two rural American families of the West Virginia–Kentucky area along the Tug Fork of the Big Sandy River in the years 1863–1891. The Hatfields of West Virginia were led by William Anderson "Devil Anse" Hatfield, while the McCoys of Kentucky were under the leadership of Randolph "Ole Ran'l" McCoy. Those involved in the feud were descended from Joseph Hatfield and William McCoy (born c. 1750). The feud has entered the American folklore lexicon as a metonym for any bitterly feuding rival parties.
|
From the following description of the Hatfield-McCoy feud, identify the person considered the leader of the Hatfields.
|
William Anderson Hatfield, also known as "Devil Anse", was considered the leader of the Hatfield clan.
|
null | false
| null |
London has an extensive and developed transport network which includes both private and public services. Journeys made by public transport systems account for 37% of London's journeys while private services accounted for 36% of journeys, walking 24% and cycling 2%. London's public transport network serves as the central hub for the United Kingdom in rail, air and road transport.
|
What percentage of journeys in London are made by walking?
|
Approximately 24% of journeys in London are made by walking.
|
null | false
| null |
Can you write me a page long essay on whether capitalism needs to be reexamined and amended? Is such a reexamination one of incremental or transformational change? If so, why? If not, why not?
|
How can I fully participate in a capitalistic society without examining it in the first place? That is
the necessary first step to then “re-examine” capitalism, which I unequivocally support. Blindly
following a belief, even an economic one, without critically assessing it can run the danger of
making me complicit in contributing to the consequences of that belief even without any intent.
I can only fully speak from my experience, so that is the lens through which I approach this
class; on the larger scale, however, a re-examination is also called for to discern whether
capitalism is the best course of action in the long term.
Upon this re-examination, I conclude that capitalism is not broken: it is functioning as intended,
to create economic value for the majority groups who popularized it.
Capitalism in the United States has achieved such fast-paced progress because Black slaves, minority immigrants, and unpaid domestic laborers have assumed the lion’s share of building the
system, and the legacies of these systems result in the inequities of race, class, and gender as
discussed in class. Such inequities are not only shameful, but plainly unnecessary to continue making such strides in industry, health, longevity, and sharing of ideas. Therefore, capitalism should be amended so that workers more equitably reap the fruits of their labor. Exorbitant CEO pay and the
distribution of the majority of profits towards shareholders; global factories which exploit
children while imposing unclean and unsafe working conditions on many; unfettered
production decimating the planet – should all be amended. Capitalism has grown automation,
technology, and AI as well as the development and sharing of thought to remarkable levels. Such progress can be leveraged to address these issues and more. This isn’t idealistic or wishful thinking; though economic situations can seem like “given”
circumstances, they are not to be taken for granted; capitalism is a human institution which
requires human participation to uphold it. Therefore, capitalism also can be amended.
Introducing another consideration is not a limitation on profits; rather, considering additional
parameters inspires innovation, as embraced by many leading organizations.
But at what pace shall capitalism be amended? The modern United States has necessary checks
and balances in the legal systems to uphold the laws and enforce the regulations, which makes
me believe that change should be incremental as to learn from our past and carefully curate a
new future. It takes time to engage the partners vital to change – academics, politicians, social
leaders, business leaders – and transformative change may leave some key stakeholders
behind. Over time, incremental change is transformative and can achieve the changes imagined above.
|
|
null | false
| null |
Pandora is a female Mexican singing trio. The trio was formed in 1981 under the name Trebol by sisters Isabel Lascurain and Mayte Lascurain and their cousin Fernanda Meade. The trio was renamed "Pandora" upon signing with EMI Records in 1984.
|
Given this paragraph about Pandora, what was the initial group name of Pandora musical group?
|
The original name of Pandora musical group was Trebol.
|
null | false
| 477
|
We begin with bridging logical expressions and graph patterns based on two intuitions/assumptions:
• A relation can often be inferred from other relations with a combination of simple logical rules that do not involve too many nodes.
• One can predict relation existence by finding whether the subgraphs containing the pair are similar to the subgraphs containing an edge with the same relation.
Take Figure 1 as an example where we wish to predict whether Person E lives in London.
An example of the first intuition goes as follows: if Person E watches the soccer matches with club Arsenal, which is based in London, then Person E may also live in London. However, such logical rules (so-called chain-like logic rules (Yang et al., 2017)) may not always hold, requiring probabilistic inference. For instance, Person G watches the matches with FC Barcelona, which is based in Barcelona, but he does not live in the same city. To make more accurate predictions, we need to combine other possible logical rules, such as "Friends often live in the same city" - in this case, Person E and Person H are friends, and Person H lives in London (rules that form such patterns are called tree-like logic rules (Yang & Song, 2019) or graph-like rules (Shi et al., 2020)).
The second intuition tells us that we can predict whether Person E lives in London by checking if the subgraphs containing the pair (Person E, London) (named target subgraphs, e.g. Person E -Arsenal -London) are similar to the subgraphs containing a live relation (named supporting subgraphs, e.g. Person I -Chelsea -London). Similarity computation is done by a neural network. To determine whether the two sets of subgraphs are similar, we additionally compare the target subgraphs against another set of subgraphs that do not contain the relation (named refuting subgraphs, e.g. Person E -Person G -Barcelona) as a baseline. A variety of options exist for selecting the refuting subgraphs, ranging from selecting those having the same topology regardless of the actual relation types, to those having both the same shape and relation types.
In addition, we do not include the edge of live in the supporting subgraphs to avoid information leakage. We also require the supporting subgraphs and refuting subgraphs to have the same shape as that of the target subgraphs in order to make similarity computation focus more on the node features and relation types rather than the topology, and that is why we call both supporting subgraphs and refuting subgraphs analogy subgraphs. This enables us to predict live relation with the pipeline above without learning an explicit embedding for live relation, unlike prior works. The reason is that the information of live relation can be implicitly expressed by the other relations found in the supporting and refuting subgraphs, e.g. live can be expressed by watch and based. This serves as the basis for our model's generalizability to relations that have no occurrence in the training set.
We can describe what kind of subgraphs we are looking for in the above example with logical expressions. The target subgraphs above will have the logical expression Source(x) ∧ Target(y) ∧ Edge(x, z) ∧ Edge(z, y), where Source(x) is true iff node x is the source node s, Target(y) is true iff node y is the target node t, and Edge(x, z) is true iff there exists an edge, regardless of relation type, between x and z. Our task is to determine whether an edge of relation type r exists between s and t. The supporting subgraphs will have Edge(x, z) ∧ Edge(z, y) ∧ Edge_r(x, y), meaning that the subgraph can be any 3-cycle except that there must be an edge with relation type r. The refuting subgraphs will have Edge(x, z) ∧ Edge(z, y) ∧ ¬Edge_r(x, y), meaning that the subgraph can be any 2-path except that the starting node and ending node must not have an edge with relation r in between. This leads to the concept of graph patterns, defined as follows:
Definition 1 (Graph Pattern). A graph pattern is a logical function that takes in a subgraph as input and returns a boolean value, consisting of logical operators (¬, ∧, ∨) as well as indicator operators determined by the existence of nodes and edges, the latter includes:
• Source(x) and Target(x), returning true iff x is the source node and target node, respectively.
• Edge(x, y), returning true iff there exists an edge between node x and y.
• Edge_r(x, y), returning true iff there exists an edge of relation r between node x and y.
We say a subgraph S matches a graph pattern Π if applying Π to S returns true. Such an operation is already well supported in graph databases, and in the following sections we give more efficient algorithms that perform matching with simple patterns.
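A minimal sketch of matching the target pattern Source(x) ∧ Target(y) ∧ Edge(x, z) ∧ Edge(z, y) against a toy graph, using brute-force enumeration over node assignments (graph databases do this far more efficiently; the example nodes and edges follow the running Figure 1 example but are otherwise invented):

```python
from itertools import permutations

def matches_target_pattern(subgraph_nodes, edges, source, target):
    """Check Source(x) ∧ Target(y) ∧ Edge(x, z) ∧ Edge(z, y):
    the subgraph is a 2-path from the source node to the target node."""
    def edge(a, b):
        # Edge(x, y): an edge between a and b in either direction, any relation type.
        return any({a, b} == {u, v} for u, v, _ in edges)
    # Try every assignment of subgraph nodes to the pattern variables x, z, y.
    for x, z, y in permutations(subgraph_nodes, 3):
        if x == source and y == target and edge(x, z) and edge(z, y):
            return True
    return False

# Toy graph edges as (node, node, relation) triples.
edges = [("Person E", "Arsenal", "watch"), ("Arsenal", "London", "based")]
ok = matches_target_pattern({"Person E", "Arsenal", "London"}, edges,
                            source="Person E", target="London")
```

Matching the supporting and refuting patterns only adds a check that an edge of relation r between x and y exists (Edge_r) or is absent (¬Edge_r), which would be one extra condition on the same enumerated assignments.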
Take Figure 1 as an example where we wish to predict whether Person E lives in London. An example of the first intuition goes as follows: if Person E watches the soccer matches with club Arsenal, which is based in London, then Person E may also live in London. However, such logical rules (so-called chain-like logic rules in (Yang et al., 2017)) may not always hold, requiring probabilistic inference. For instance, Person G watches the matches with FC Barcelona, which is based in Barcelona, but he does not live in the same city. To make more accurate predictions, we need to combine other possible logical rules, such as "Friends often live in the same city" - in this case, Person E and Person H are friends, and Person H lives in London (rules that form such patterns are called tree-like logic rules (Yang & Song, 2019) or graph-like rules (Shi et al., 2020)).
|
Can they also provide reference to some relevant works that have explicitly identified such patterns?
|
We added some references about using chain [3], tree [4], and graph [5] structures to represent the logical rules. We have updated Section 2 in the revised PDF.
[3] Differentiable Learning of Logical Rules for Knowledge Base Reasoning. Yang et al., NIPS 2017.
[4] Learn to Explain Efficiently via Neural Logic Inductive Learning. Yang et al., ICLR 2020.
[5] Differentiable Learning of Graph-like Logical Rules from Knowledge Graphs. Shi et al., arXiv 2021.
|
null | false
| 224
|
Recently, there is a surge of excitement in adding numerous new domains to conversational agents such as Alexa, Google Assistant, Cortana and Siri to support a myriad of use cases. However, building a slot tagger, which is a key component for natural language understanding (NLU) BIBREF0 , for a new domain requires massive amounts of labeled data, hindering rapid development of new skills. To address the data-intensiveness problem, domain adaptation approaches have been successfully applied. Previous approaches are roughly categorized into two groups: data-driven approaches BIBREF1 , BIBREF2 and model-driven approaches BIBREF3 , BIBREF4 .
In the data-driven approach, new target models are trained by combining target domain data with relevant data from a repository of arbitrary labeled datasets using domain adaptation approaches such as feature augmentation BIBREF1 . A disadvantage of this approach is the increase in training time as the amount of reusable data grows. The reusable data might contain hundreds of thousands of samples, making iterative refinement prohibitive. In contrast, the model-driven approach utilizes “expert" models for summarizing the data for reusable slots BIBREF3 , BIBREF4 . The outputs of the expert models are directly used when training new domains, allowing for faster training. A drawback of this approach is that it requires explicit concept alignments which itself is not a trivial task, potentially missing lots of reusable concepts. Additionally, it's not easy to generalize these models to new, unseen slots.
In this paper, we present a new domain adaptation technique for slot tagging inspired by recent advances in zero-shot learning. Traditionally, slot tagging is formulated as a sequence labeling task using the BIO representation (Figure 1 ). Our approach formulates this problem as detecting spans that contain values for each slot as shown in Figure 1 . For implicit transfer of reusable concepts across domains, we represent slots in a shared latent semantic space by embedding the slot description. With the shared latent space, domain adaptation can simply be done by fine-tuning a base model, which is trained on massive data, with a handful of target domain data without any explicit concept alignments. A similar idea of utilizing zero-shot learning for slot tagging has been proven to work in semi-supervised settings BIBREF5 . Our zero-shot model architecture differs from this by adding: 1) an attention layer to produce the slot-aware representations of input words, 2) a CRF layer to better satisfy global consistency constraints, 3) character-level embeddings to incorporate morphological information. Despite its simplicity, we show that our model outperforms all existing methods including the previous zero-shot learning approach in domain adaptation settings.
We first describe our approach called Zero-Shot Adaptive Transfer model (ZAT) in detail. We then describe the dataset we used for our experiments. Using this data, we conduct experiments comparing our ZAT model with a set of state-of-the-art models: Bag-of-Expert (BoE) models and their non-expert counterparts BIBREF4 , and the Concept Tagger model BIBREF5 , showing that our model can lead to significant F1-score improvements. This is followed by an in-depth analysis of the results. We then provide a survey of related work and concluding remarks.
In this paper, we present a new domain adaptation technique for slot tagging inspired by recent advances in zero-shot learning.
|
What is their technique inspired by?
|
Recent advances in zero-shot learning.
|
null | false
| null |
UBS Group AG[nb 1] is a multinational investment bank and financial services company founded and based in Switzerland. Co-headquartered in the cities of Zürich and Basel, it maintains a presence in all major financial centres as the largest Swiss banking institution and the largest private bank in the world. UBS client services are known for their strict bank–client confidentiality and culture of banking secrecy.[nb 2] Because of the bank's large positions in the Americas, EMEA, and Asia Pacific markets, the Financial Stability Board considers it a global systemically important bank.
Apart from private banking, UBS provides wealth management, asset management, and investment banking services for private, corporate, and institutional clients with international service. UBS manages the largest amount of private wealth in the world, counting approximately half of the world's billionaires among its clients. Despite its trimming of sell-side operations, UBS maintains a global investment bank and is considered a primary market maker. The bank also maintains numerous underground bank vaults, bunkers, and storage facilities for gold bars around the Swiss Alps and internationally. Partly due to its banking secrecy, it has been at the centre of numerous tax avoidance investigations undertaken by U.S., French, German, Israeli, and Belgian authorities. UBS operations in Switzerland and the United States were respectively ranked first and second on the 2018 Financial Secrecy Index.
As of May 2022, UBS is the third largest bank in Europe with a market capitalization of $63 billion. It is one of the eight global "Bulge Bracket" banks. It has over CHF 3.2 trillion in assets under management (AUM), approximately CHF 2.8 trillion of which are invested assets. In June 2017, its return on invested capital was 11.1%, followed by Goldman Sachs' 9.35%, and JPMorgan Chase's 9.456%. In late 2016, UBS established a blockchain technology research lab in London to advance its cyber security and encryption of client activities. Based on regional deal flow and political influence, UBS is considered one of the "biggest, most powerful financial institutions in the world". The company's capital strength, security protocols, and reputation for discretion have yielded a substantial market share in banking and a high level of brand loyalty. Alternatively, it receives routine criticism for facilitating tax noncompliance and off-shore financing. UBS is a primary dealer and Forex counterparty of the U.S. Federal Reserve.
|
Is UBS a good bank?
|
UBS Group AG is a multinational investment bank and financial services company founded and based in Switzerland. Co-headquartered in the cities of Zürich and Basel, it maintains a presence in all major financial centres as the largest Swiss banking institution and the largest private bank in the world. UBS client services are known for their strict bank–client confidentiality and culture of banking secrecy. Because of the bank's large positions in the Americas, EMEA, and Asia Pacific markets, the Financial Stability Board considers it a global systemically important bank.
Apart from private banking, UBS provides wealth management, asset management, and investment banking services for private, corporate, and institutional clients with international service. UBS manages the largest amount of private wealth in the world, counting approximately half of the world's billionaires among its clients. Despite its trimming of sell-side operations, UBS maintains a global investment bank and is considered a primary market maker. The bank also maintains numerous underground bank vaults, bunkers, and storage facilities for gold bars around the Swiss Alps and internationally. Partly due to its banking secrecy, it has been at the centre of numerous tax avoidance investigations undertaken by U.S., French, German, Israeli, and Belgian authorities. UBS operations in Switzerland and the United States were respectively ranked first and second on the 2018 Financial Secrecy Index.
As of May 2022, UBS is the third largest bank in Europe with a market capitalization of $63 billion. It is one of the eight global "Bulge Bracket" banks. It has over CHF 3.2 trillion in assets under management (AUM), approximately CHF 2.8 trillion of which are invested assets. In June 2017, its return on invested capital was 11.1%, followed by Goldman Sachs' 9.35%, and JPMorgan Chase's 9.456%. In late 2016, UBS established a blockchain technology research lab in London to advance its cyber security and encryption of client activities. Based on regional deal flow and political influence, UBS is considered one of the "biggest, most powerful financial institutions in the world". The company's capital strength, security protocols, and reputation for discretion have yielded a substantial market share in banking and a high level of brand loyalty. Alternatively, it receives routine criticism for facilitating tax noncompliance and off-shore financing. UBS is a primary dealer and Forex counterparty of the U.S. Federal Reserve.
|
null | false
| null |
What is slack key guitar?
|
Slack key guitar is a style of fingerstyle guitar played in open tunings that originated in 19th century Hawaii. The style developed after Mexican cowboys (vaqueros) brought the guitar to Hawaii. The Hawaiians chose to retune the guitar by slackening some of the strings, often to an open G chord. The most popular tuning, DGDGBD, is called "taro patch". Families often had their own tunings, which were held as closely guarded secrets.
|
|
null | false
| 249
|
To operationalise the task of automatic cyberbullying detection, we developed and tested a fine-grained annotation scheme and applied it to our corpora. While a detailed overview of the guidelines is presented in our technical report BIBREF30 , we briefly present the categories and main annotation steps below.
Threat/Blackmail: expressions containing physical or psychological threats or indications of blackmail.
Insult: expressions meant to hurt or offend the victim.
General insult: general expressions containing abusive, degrading or offensive language that are meant to insult the addressee.
Attacking relatives: insulting expressions towards relatives or friends of the victim.
Discrimination: expressions of unjust or prejudicial treatment of the victim. Two types of discrimination are distinguished (i.e., sexism and racism). Other forms of discrimination should be categorised as general insults.
Curse/Exclusion: expressions of a wish that some form of adversity or misfortune will befall the victim and expressions that exclude the victim from a conversation or a social group.
Defamation: expressions that reveal confident or defamatory information about the victim to a large public.
Sexual Talk: expressions with a sexual meaning or connotation. A distinction is made between innocent sexual talk and sexual harassment.
Defense: expressions in support of the victim, expressed by the victim himself or by a bystander.
Bystander defense: expressions by which a bystander shows support for the victim or discourages the harasser from continuing his actions.
Victim defense: assertive or powerless reactions from the victim.
Encouragement to the harasser: expressions in support of the harasser.
Other: expressions that contain any other form of cyberbullying-related behaviour than the ones described here.
Based on the literature on role-allocation in cyberbullying episodes BIBREF51 , BIBREF50 , four roles are distinguished, including victim, bully, and two types of bystanders.
Harasser or Bully: person who initiates the bullying.
Victim: person who is harassed.
Bystander-defender: person who helps the victim and discourages the harasser from continuing his actions.
Bystander-assistant: person who does not initiate, but helps or encourages the harasser.
Essentially, the annotation scheme describes two levels of annotation. Firstly, the annotators were asked to indicate, at the post level, whether the post under investigation was related to cyberbullying. If the post was considered a signal of cyberbullying, annotators identified the author's role. Secondly, at the subsentence level, the annotators were tasked with the identification of a number of fine-grained text categories related to cyberbullying. More concretely, they identified all text spans corresponding to one of the categories described in the annotation scheme. To provide the annotators with some context, all posts were presented within their original conversation when possible. All annotations were done using the Brat rapid annotation tool BIBREF52 , some examples of which are presented in Table TABREF33 .
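The two annotation levels described above can be captured in a simple data structure. The sketch below is illustrative only: the class and field names are our own and do not correspond to the Brat output format.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Span:
    """A subsentence-level annotation: a text span labelled with one category."""
    start: int       # character offset where the span begins
    end: int         # character offset where the span ends (exclusive)
    category: str    # e.g. "Threat/Blackmail", "General insult", "Victim defense"

@dataclass
class PostAnnotation:
    """A post-level annotation plus its fine-grained spans."""
    text: str
    is_cyberbullying: bool
    author_role: Optional[str] = None  # "Harasser", "Victim", "Bystander-defender", "Bystander-assistant"
    spans: List[Span] = field(default_factory=list)

# A made-up example post annotated at both levels
post = PostAnnotation(
    text="go away nobody likes you",
    is_cyberbullying=True,
    author_role="Harasser",
    spans=[Span(0, 24, "Curse/Exclusion")],
)
```

Keeping character offsets rather than token indices mirrors how Brat stores spans, which makes round-tripping annotations straightforward.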
Firstly, the annotators were asked to indicate, at the post level, whether the post under investigation was related to cyberbullying. If the post was considered a signal of cyberbullying, annotators identified the author's role. Secondly, at the subsentence level, the annotators were tasked with the identification of a number of fine-grained text categories related to cyberbullying.
|
What are the two levels of annotation
|
The post level and the subsentence level.
|
null | false
| 298
|
At a high level, as shown in Figure FIGREF4 , a product search engine works as follows: a customer issues a query, which is passed to a lexical matching engine (typically an inverted index BIBREF0 , BIBREF1 ) to retrieve all products that contain words in the query, producing a match set. The match set passes through stages of ranking, wherein top results from the previous stage are re-ranked before the most relevant items are finally displayed. It is imperative that the match set contain a relevant and diverse set of products that match the customer intent in order for the subsequent rankers to succeed. However, inverted index-based lexical matching falls short in several key aspects:
In this paper, we address the question: Given rich customer behavior data, can we train a deep learning model to retrieve matching products in response to a query? Intuitively, there is reason to believe that customer behavior logs contain semantic information; customers who are intent on purchasing a product circumvent the limitations of lexical matching by query reformulation or by deeper exploration of the search results. The challenge is the sheer magnitude of the data as well as the presence of noise, a challenge that modern deep learning techniques address very effectively.
Product search is different from web search as the queries tend to be shorter and the positive signals (purchases) are sparser than clicks. Models based on conversion rates or click-through-rates may incorrectly favor accessories (like a phone cover) over the main product (like a cell phone). This is further complicated by shoppers maintaining multiple intents during a single search session: a customer may be looking for a specific television model while also looking for accessories for this item at the lowest price and browsing additional products to qualify for free shipping. A product search engine should reduce the effort needed from a customer with a specific mission (narrow queries) while allowing shoppers to explore when they are looking for inspiration (broad queries).
As mentioned, product search typically operates in two stages: matching and ranking. Products that contain words in the query ( INLINEFORM0 ) are the primary candidates. Products that have prior behavioral associations (products bought or clicked after issuing a query INLINEFORM1 ) are also included in the candidate set. The ranking step takes these candidates and orders them using a machine-learned rank function to optimize for customer satisfaction and business metrics.
We present a neural network trained with large amounts of purchase and click signals to complement a lexical search engine in ad hoc product retrieval. Our first contribution is a loss function with a built-in threshold to differentiate between random negative, impressed but not purchased, and purchased items. Our second contribution is the empirical result that recommends average pooling in combination with INLINEFORM0 -grams that capture short-range linguistic patterns instead of more complex architectures. Third, we show the effectiveness of consistent token hashing in Siamese networks for zero-shot learning and handling out of vocabulary tokens.
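The first and third contributions can be sketched in a few lines. The threshold values and hash settings below are illustrative assumptions on our part, not the values reported in the paper:

```python
import hashlib

def three_part_hinge(score, label, pos_thr=0.9, imp_thr=0.55, neg_thr=0.2):
    """Hinge-style loss with built-in thresholds separating the three label types.

    score: similarity between query and product embeddings.
    label: "purchased" | "impressed" | "random".
    Purchased items are pushed above pos_thr; impressed-but-not-purchased items
    are kept below imp_thr; random negatives are pushed below neg_thr.
    """
    if label == "purchased":
        return max(0.0, pos_thr - score)
    if label == "impressed":
        return max(0.0, score - imp_thr)
    return max(0.0, score - neg_thr)

def hash_token(token, num_bins=2**18, seed=42):
    """Consistent token hashing: any token, seen or unseen, maps to a stable
    embedding bin, so out-of-vocabulary words still receive an embedding row."""
    digest = hashlib.md5(f"{seed}:{token}".encode("utf-8")).hexdigest()
    return int(digest, 16) % num_bins
```

Because the hash is deterministic, the same unseen token always lands in the same embedding bin at training and inference time, which is what enables the zero-shot behaviour mentioned above.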
In Section SECREF2 , we highlight related work. In Section SECREF3 , we describe our model architecture, loss functions, and tokenization techniques including our approach for unseen words. We then introduce the readers to the data and our input representations for queries and products in Section SECREF4 . Section SECREF5 presents the evaluation metrics and our results. We provide implementation details and optimizations to efficiently train the model with large amounts of data in Section SECREF6 . Finally, we conclude in Section SECREF7 with a discussion of future work.
We present a neural network trained with large amounts of purchase and click signals to complement a lexical search engine in ad hoc product retrieval.
|
What kind of neural network?
|
A Siamese neural network trained with large amounts of purchase and click signals, using average pooling over n-gram embeddings.
|
null | false
| null |
On a gravel bike ride, categorize which of these items are 'useful', 'not useful', 'dangerous', or ,'neither': shovel, spare tire, clock, horn, water bottle, gps, knife, snake
|
shovel = not useful
spare tire = useful
clock = neither
horn = useful
water bottle = useful
gps = useful
knife = useful
snake = dangerous
|
|
1911.11744
| false
| null |
We evaluate our model in a simulated binning task in which the robot is tasked to place a cube into a bowl as outlined by the verbal command. Each environment contains between three and five objects differentiated by their size (small, large), shape (round, square) and color (red, green, blue, yellow, pink), totalling in 20 different objects. Depending on the generated scenario, combinations of these three features are necessary to distinguish the targets from each other, allowing for tasks of varying complexity.
We evaluate our model in a simulated binning task in which the robot is tasked to place a cube into a bowl as outlined by the verbal command. Each environment contains between three and five objects differentiated by their size (small, large), shape (round, square) and color (red, green, blue, yellow, pink), totalling in 20 different objects. Depending on the generated scenario, combinations of these three features are necessary to distinguish the targets from each other, allowing for tasks of varying complexity.
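As a check of the arithmetic above, the 20-object inventory and a scene of three to five objects can be generated as follows. This is an illustrative sketch; the names below are ours, not the simulator's API:

```python
import random

SIZES = ["small", "large"]
SHAPES = ["round", "square"]
COLORS = ["red", "green", "blue", "yellow", "pink"]

# Every size/shape/color combination: 2 * 2 * 5 = 20 distinct objects
OBJECTS = [(size, shape, color) for size in SIZES for shape in SHAPES for color in COLORS]

def sample_scene(rng=random):
    """Draw a scenario containing between three and five distinct objects."""
    return rng.sample(OBJECTS, k=rng.randint(3, 5))

scene = sample_scene()
```

Depending on which objects are drawn together, one, two, or all three features may be needed to single out the target, which is what produces tasks of varying complexity.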
|
What simulations are performed by the authors to validate their approach?
|
The answers are shown as follows:
* a simulated binning task in which the robot is tasked to place a cube into a bowl as outlined by the verbal command
|
null | false
| 310
|
In Table TABREF21 , we show the ablation study of the features used in our model on English-German, German-English, and English-Czech. For each language pair, we show the performance of CEQE without adding the corresponding components specified in the second column respectively. The last row shows the performance of the complete CEQE with all the components. As the baseline features released in the WMT2018 QE Shared Task for English-Latvian are incomplete, we train our CEQE model without using such features. We can glean several observations from this data:
Because the number of “OK” tags is much larger than the number of “BAD” tags, the model is easily biased towards predicting the “OK” tag for each target word. The F1-OK scores are higher than the F1-BAD scores across all the language pairs.
For German-English, English Czech, and English-German (SMT), adding the baseline features can significantly improve the F1-BAD scores.
For English-Czech, English-German (SMT), and English-German (NMT), removing POS tags makes the model more biased towards predicting “OK” tags, which leads to higher F1-OK scores and lower F1-BAD scores.
Adding the convolution layer helps to boost the performance of F1-Multi, especially on English-Czech and English-German (SMT) tasks. Comparing the F1-OK scores of the model with and without the convolution layer, we find that adding the convolution layer helps to boost the F1-OK scores when translating from English to other languages, i.e., English-Czech, English-German (SMT and NMT). We conjecture that the convolution layer can capture the local information more effectively from the aligned source words in English.
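For reference, F1-Multi in the WMT word-level QE task is the product of the F1 scores for the "OK" and "BAD" classes; a minimal computation looks like this (the helper names are ours):

```python
def f1_for(tag, gold, pred):
    """F1 score for a single tag over aligned gold/predicted label sequences."""
    tp = sum(1 for g, p in zip(gold, pred) if g == tag and p == tag)
    fp = sum(1 for g, p in zip(gold, pred) if g != tag and p == tag)
    fn = sum(1 for g, p in zip(gold, pred) if g == tag and p != tag)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def f1_multi(gold, pred):
    """F1-Multi = F1-OK * F1-BAD, the primary word-level QE metric."""
    return f1_for("OK", gold, pred) * f1_for("BAD", gold, pred)

gold = ["OK", "OK", "BAD", "OK", "BAD"]
pred = ["OK", "BAD", "BAD", "OK", "OK"]
```

Because the metric multiplies the two per-class scores, a model biased towards predicting "OK" everywhere is penalized through a near-zero F1-BAD factor.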
Adding the convolution layer helps to boost the performance of F1-Multi, especially on English-Czech and English-German (SMT) tasks. Comparing the F1-OK scores of the model with and without the convolution layer, we find that adding the convolution layer helps to boost the F1-OK scores when translating from English to other languages, i.e., English-Czech, English-German (SMT and NMT).
|
Whether adding the convolution layer help to boost the F1-OK scores when translating from English to other languages?
|
Yes.
|
null | false
| null |
What was revolutionary about the Barbie doll when it was first sold?
|
The Barbie doll was revolutionary because it was one of the first children's dolls to depict an adult, rather than the traditional baby doll.
|
|
null | false
| 161
|
Emotion detection has long been a topic of interest to scholars in the natural language processing (NLP) domain. Researchers aim to recognize the emotion behind a text and group similar emotions together. Building an emotion classifier not only helps to understand each user's feelings but can also be extended to various applications, for example, the motivation behind a user's interests BIBREF0. With the release of large text corpora on social media and the emotion categories proposed by BIBREF1, BIBREF2, numerous models have been proposed and have achieved impressive precision so far. For example, DeepMoji BIBREF3 utilized transfer learning to enhance the understanding of emotions and sarcasm behind the target sentence. CARER BIBREF4 learned contextualized affect representations to make itself more sensitive to rare words and the scenario behind the texts.
As methods mature, text-based emotion detection can be extended from a single utterance to a dialogue composed of a series of utterances. Table TABREF2 illustrates the difference between single-utterance and dialogue emotion recognition. As the table shows, even when the same person says the same sentence, the emotion it conveys may vary depending on the background of the conversation, the tone of speaking, or personality. Therefore, for emotion detection, the information from preceding utterances in a conversation is critical.
In SocialNLP 2019 EmotionX, the challenge is to recognize emotions for all utterances in the EmotionLines dataset, a dataset consisting of dialogues. To meet the need to consider context, we develop two classification models inspired by bidirectional encoder representations from transformers (BERT) BIBREF5: FriendsBERT and ChatBERT. In this paper, we introduce our approaches, including causal utterance modeling, model pre-training, and fine-tuning.
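One common way to give a BERT-style classifier access to preceding utterances is to pack them into the input with separator tokens. The sketch below is a generic illustration of this idea and not necessarily the exact causal utterance modeling used by FriendsBERT and ChatBERT:

```python
def build_bert_input(utterances, target_idx, max_context=2,
                     cls="[CLS]", sep="[SEP]"):
    """Concatenate up to `max_context` preceding utterances with the target one."""
    start = max(0, target_idx - max_context)
    context = utterances[start:target_idx]
    pieces = context + [utterances[target_idx]]
    return f"{cls} " + f" {sep} ".join(pieces) + f" {sep}"

dialogue = ["You look great today.", "Thanks!", "You look great today."]
```

With this packing, the classifier sees the conversational background of the target utterance, so identical sentences in different contexts can receive different emotion labels.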
To meet the need to consider context, we develop two classification models inspired by bidirectional encoder representations from transformers (BERT): FriendsBERT and ChatBERT.
|
What does this paper develop?
|
The paper develops two classification models inspired by bidirectional encoder representations from transformers (BERT): FriendsBERT and ChatBERT.
|
null | false
| 72
|
In this paper we aim to use text as well as other features to see how they complement each other in a deep learning model. In the stance classification domain, previous work has shown that text features are limited, suggesting that adding extra-linguistic constraints could improve performance BIBREF6 , BIBREF7 , BIBREF8 . For example, Hasan and Ng as well as Thomas et al. require that posts written by the same author have the same stance BIBREF9 , BIBREF10 . The addition of this constraint yields accuracy improvements of 1–7% for some models and datasets. Hasan and Ng later added user-interaction constraints and ideology constraints BIBREF7 : the former models the relationship among posts in a sequence of replies and the latter models inter-topic relationships, e.g., users who oppose abortion could be conservative and thus are likely to oppose gay rights.
For work focusing on online forum text, since posts are linked through user replies, sequential labeling methods have been used to model relationships between posts. For example, Hasan and Ng use hidden Markov models (HMMs) to model dependent relationships to the preceding post BIBREF9 ; Burfoot et al. use iterative classification to repeatedly generate new estimates based on the current state of knowledge BIBREF11 ; Sridhar et al. use probabilistic soft logic (PSL) to model reply links via collaborative filtering BIBREF12 . In the Facebook dataset we study, we use comments instead of reply links. However, as the ultimate goal in this paper is predicting not comment stance but post stance, we treat comments as extra information for use in predicting post stance.
For example, Hasan and Ng use hidden Markov models (HMMs) to model dependent relationships to the preceding post (Hasan and Ng, 2013b);
|
What model do Hasan and Ng use to model-dependent relationships to the preceding post (Hasan and Ng, 2013b)?
|
Hidden Markov models (HMMs).
|
1811.11136
| true
| null |
We train our models on Sentiment140 and Amazon product reviews. Both of these datasets concentrate on sentiment expressed in short texts. Summary descriptions of the other datasets used for validation are given below:
We have provided baseline results for the accuracy of other models on these datasets (as shown in Table 1). For training the softmax model, we divide the text sentiment into two classes, positive and negative. For training the tanh model, we convert the positive and negative labels into a continuous sentiment score in [-1.0, 1.0], where 1.0 means positive and -1.0 means negative. We also test various models, calculate metrics such as accuracy, precision, and recall, and show the results in Table 2. Tables 3 through 8 give more detailed precision and recall figures for our models on the other datasets.
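The two label encodings described above can be written down directly. The sketch below illustrates the conversion; the decision threshold of zero for mapping a tanh score back to a discrete label is our assumption:

```python
def to_softmax_label(sentiment):
    """Two-class one-hot target for the softmax model."""
    return [1.0, 0.0] if sentiment == "positive" else [0.0, 1.0]

def to_tanh_score(sentiment):
    """Continuous target in [-1.0, 1.0] for the tanh model."""
    return 1.0 if sentiment == "positive" else -1.0

def tanh_score_to_label(score, threshold=0.0):
    """Map a predicted tanh score back to a discrete label for evaluation."""
    return "positive" if score >= threshold else "negative"
```

Converting predicted scores back to discrete labels is what allows the tanh model to be evaluated with the same accuracy, precision, and recall metrics as the softmax model.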
We train our models on Sentiment140 and Amazon product reviews. Both of these datasets concentrate on sentiment expressed in short texts.
For training the softmax model, we divide the text sentiment into two classes, positive and negative. For training the tanh model, we convert the positive and negative labels into a continuous sentiment score in [-1.0, 1.0], where 1.0 means positive and -1.0 means negative.
|
Was the introduced LSTM+CNN model trained on annotated data in a supervised fashion?
|
Yes.
|
2002.11893
| false
| null |
Dialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user states and system states.
Dialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. To evaluate the quality of the annotation of dialogue acts and states, three experts were employed to manually annotate dialogue acts and states for 50 dialogues. The results show that our annotations are of high quality. Finally, each dialogue contains a structured goal, a task description, user states, system states, dialogue acts, and utterances.
Dialogue Collection: before the formal data collection starts, we required the workers to make a small number of dialogues and gave them feedback about the dialogue quality. Then, well-trained workers were paired to converse according to the given goals. The workers were also asked to annotate both user states and system states.
Dialogue Annotation: we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories. To evaluate the quality of the annotation of dialogue acts and states, three experts were employed to manually annotate dialogue acts and states for 50 dialogues. The results show that our annotations are of high quality. Finally, each dialogue contains a structured goal, a task description, user states, system states, dialogue acts, and utterances.
|
How was the corpus annotated?
|
The answers are shown as follows:
* The workers were also asked to annotate both user states and system states
* we used some rules to automatically annotate dialogue acts according to user states, system states, and dialogue histories
|
null | false
| null |
If you have one of the following issues, should you consult a doctor, a lawyer, or a plumber: a leaky faucet, a persistent cough, a small claims court summons, a torn meniscus, a urinary tract infection, a broken toilet, a federal indictment, or a search warrant from the FBI.
|
If you have a leaky faucet or a broken toilet, you should call a plumber. If you have a persistent cough, a torn meniscus or a urinary tract infection, you should consult a physician. If you have received a small claims court summons or a federal indictment, or a search warrant from the FBI, you will probably want to lawyer up.
|
|
null | false
| null |
What are the new 7 Wonders of the World:
|
These are the new 7 Wonders of the World:
- Great Wall of China
- Petra
- Christ the Redeemer
- Machu Picchu
- Chichén Itzá
- Colosseum
- Taj Mahal
|
|
null | false
| null |
New Orleans is known for specialties including beignets (locally pronounced like "ben-yays"), square-shaped fried dough that could be called "French doughnuts" (served with café au lait made with a blend of coffee and chicory rather than only coffee); and po' boy and Italian muffuletta sandwiches; Gulf oysters on the half-shell, fried oysters, boiled crawfish and other seafood; étouffée, jambalaya, gumbo and other Creole dishes; and the Monday favorite of red beans and rice (Louis Armstrong often signed his letters, "Red beans and ricely yours"). Another New Orleans specialty is the praline locally, a candy made with brown sugar, granulated sugar, cream, butter, and pecans. The city offers notable street food including the Asian inspired beef Yaka mein.
|
List the sweet culinary specialities of New Orleans from this provided passage
|
The sweet culinary specialities of New Orleans include:
- Beignets: Square shaped fried dough similar to donuts
- Praline: A candy made with sugar, cream and pecans
|
null | false
| null |
Who is Mother Teresa?
|
Mother Teresa was an Albanian-Indian Catholic nun and the founder of the Missionaries of Charity. She was born in 1910 and died in September 1997. When she was 18, she moved to Ireland and then to India, where she lived most of her life.
The Missionaries of Charity manages homes for people who are dying of HIV/AIDS, leprosy, and tuberculosis. They also run soup kitchens, dispensaries, mobile clinics, children's and family counseling programs, as well as orphanages and schools. Members of this congregation take vows of chastity, poverty, and obedience, and also profess a fourth vow: to give "wholehearted free service to the poorest of the poor." Pope Francis canonized her as St. Teresa of Calcutta on September 4, 2016, in St. Peter's Square in Vatican City.
|
|
null | false
| null |
The appearance of a pickleball court, and the manner of play, resembles tennis, but the court is the size of a doubles badminton court, less than a third the size of a tennis court. Court lines and rules are specific to pickleball and include two 7-foot areas (2.1 m) on either side of the net known as the non-volley zones, where the ball cannot be hit with the paddle unless the ball bounces first. The official rules specify side-out scoring, where only the serving team can score a point. All serves are made with an underhand stroke. The hard plastic ball used in pickleball produces significantly less bounce than softer flexible balls, such as a tennis ball. The minimal bounce combined with the non-volley zone and underhand serve give the game dynamic pace and strategy ranging from soft dink shots that stay within the two non-volley zones to powerful drive shots and overhead smash shots. To minimize any advantage the serving or receiving side might have at the beginning of the game, the ball must bounce once on each side of the net before either team may volley the ball, which is to hit the ball in the air before it bounces.
|
Given this paragraph, how big are the non-volley zones in pickleball and where are they located?
|
They are 7-foot (2.1 m) areas located on either side of the net.
|
null | false
| null |
How do you live a life full of gratitude?
|
Each day we are faced with both positive and negative experiences. As humans we tend to focus on the negative experiences, but it is important to recognize that the challenges in life make us stronger and shape us into who we are. To have more gratitude it's important to be grateful for even the small things in life as well. Setting aside time each day to write down the things or people in your life that you're grateful for can help shift your mindset, and train your brain towards gratitude.
|
|
null | false
| null |
Give me a list of the top five most followed Instagram profiles, along with their number of followers.
|
1. @Instagram, 623 million
2. @Cristiano, 567 million
3. @leomessi, 448 million
4. @selenagomez, 407 million
5. @kyliejenner, 383 million
|
|
null | false
| null |
Write a facebook post responding to a question about whether public or private school is better
|
I really think it comes down to personal preference and the type of kid(s) you have. I don't think there is a universal one size fits all approach here. There's a lot of factors that can go into the decision, whether you live in a good school district, if your kid is special needs or struggles socially, whether you can afford private school, on and on. For us, we decided to go public because our school district is relatively good. Our school is close to our house and our kids do relatively well in school. So far, we've been happy. My one complaint is that there is not a ton of personal attention so I do think it requires a lot of parent involvement to keep kids on track. Whatever you decide, you just need to weigh what is right for your family.
|
|
null | false
| null |
Tell me whether these are famous as actors or authors: Russell Crowe, Tom Hanks, Jamie Lee Curtis, Joanne Woodward, Sarah Jessica Parker, Tom Cruise, Alexandre Dumas, JK Rowling, Ingrid Bergman, Grace Kelly, Judi Dench, Kate Winslet, Jodie Foster, Doris Day, William Shakespeare, Jalal al-Din Muhammad Rumi
|
The famous actors from this list are Russell Crowe, Tom Hanks, Jamie Lee Curtis, Joanne Woodward, Sarah Jessica Parker, Tom Cruise, Ingrid Bergman, Grace Kelly, Judi Dench, Kate Winslet, Jodie Foster and Doris Day. The famous authors are Alexandre Dumas, JK Rowling, William Shakespeare and Jalal al-Din Muhammad Rumi.
|
|
null | false
| null |
Why are cookies delicious?
|
Cookies are delicious typically because they contain fat (oils usually), sugar, and salt in addition to carbohydrates from flour, many ingredients of which human taste buds enjoy. They are also baked, adding the heat element, which chemically alters them to be more palatable.
|
|
null | false
| null |
Give me a list of cities where I can stop/stay as I drive from Austin to Chicago by road
|
There could be multiple itineraries for a road trip from Austin to Chicago, as it's a 1000+ mile drive, but some of the prominent cities that you drive through are:
Dallas / Fort Worth, Texas
Little Rock, Arkansas
Memphis, Tennessee
St Louis, Missouri
|
|
null | false
| null |
What is Docker?
|
Docker takes away repetitive, mundane configuration tasks and is used throughout the development lifecycle for fast, easy and portable application development – desktop and cloud. Docker’s comprehensive end to end platform includes UIs, CLIs, APIs and security that are engineered to work together across the entire application delivery lifecycle.
|
|
null | false
| null |
A diploma is a document awarded by an educational institution (such as a college or university) testifying the recipient has graduated by successfully completing their courses of studies. Historically, it has also referred to a charter or official document of diplomacy.
|
What is the significance of a diploma?
|
A diploma testifies the recipient has graduated by successfully completing their courses of studies.
|
null | false
| null |
Classify the following items as either cars or trucks: Ford F150, Audi A8, BMW 318i, Nissan Titan, Tesla Model 3
|
Ford F150 - truck; Audi A8 - car; BMW 318i - car; Nissan Titan - truck; Tesla Model 3 - car
|
|
null | false
| null |
What are the drawbacks of nuclear fusion?
|
List of Cons of Nuclear Fusion
1. It is extremely difficult to achieve.
In stars, strong gravitational forces and high temperatures naturally create a fusion environment. But here on Earth, we face the challenge of making nuclear fuel hot and confined enough to start a self-sustaining ignition.
Imagine trying to contain the plasma (a gaseous mixture of deuterium, tritium atoms and ions, and helium, the fusion product) at 100 million degrees Celsius. No material can withstand that temperature. So scientists attempt to keep the plasma (being electrically charged and having a magnetic field of its own) suspended in a magnetic field produced by superconducting magnets around the fusion chamber/vessel. This is similar to how maglev trains float above their tracks at high speeds. This process is very difficult to achieve (as compared to nuclear FISSION).
2. It produces radioactive waste.
Though nuclear power plants only emit negligible amounts of carbon dioxide into the atmosphere, its nuclear fuel chain process does produce radioactive waste.
The radioactive waste produced with fusion is not the same as with fission, and the two are often confused. With a nuclear fission reactor, the radiation is alpha particles, beta particles, and gamma rays (which can penetrate your skin and break apart the bonds in your DNA structure, giving you all kinds of cancer). In contrast, in a nuclear fusion reactor, the vessel wall is the only part that will be bombarded by the high energy neutrons, and if, in the worst case, all the protective layers surrounding the main fusion vessel fail, the neutron radiation will stop as soon as the fusion reaction stops. In a fission reactor, the cancer-causing radiation still exists even in the waste materials, which means that extreme measures are needed to bury the waste to keep it as far away as possible from humans. In the case of nuclear fusion, the activated materials (i.e., the metal vessels which have been bombarded by neutrons) can be stored safely for about 100 years, after which the radiation level becomes so low that they can be reused in the fusion reactor again.
|
|
null | false
| 215
|
Neural machine translation (NMT) is popularly implemented as an encoder-decoder framework BIBREF0, in which the encoder is in charge of source sentence representation. Typically, the input sentence is implicitly represented as a contextualized source representation through deep learning networks. By further feeding the decoder, the source representation is used to learn dependent time-step context vectors for predicting the target translation BIBREF1.
In state-of-the-art Transformer-based encoder, self-attention mechanisms are good at capturing the general information in a sentence BIBREF2, BIBREF3, BIBREF4. However, it is difficult to distinguish which kind of information lying deeply under the language is really salient for learning source representation. Intuitively, when a person reads a source sentence, he/she often selectively focuses on the basic sentence meaning, and re-reads the entire sentence to understand its meaning completely. Take the English sentence in Table TABREF2 as an example. We manually annotate its basic meaning as a shorter sequence of words than in the original sentence, called backbone information. Obviously, these words with the basic meaning contain more important information for human understanding than the remaining words in the sentence. We argue that such backbone information is also helpful for learning source representation, and is not explicitly considered by the existing NMT system to enrich the source sentence representation.
In this paper, we propose a novel explicit sentence compression approach to enhance the source representation for NMT. To this end, we first design three sentence compression models, covering supervised, unsupervised, and semi-supervised settings to accommodate the needs of various languages and scenarios, to learn a sequence of backbone information words (as shown in Table TABREF2) from the source sentence. We then propose three translation models, including backbone source-side fusion based NMT (BSFNMT), backbone target-side fusion based NMT (BTFNMT), and both-side fusion based NMT (BBFNMT), to introduce this backbone knowledge into the existing Transformer NMT system for improving translation predictions. Empirical results on the WMT14 English-to-German and English-to-French translation tasks show that the proposed approach significantly improves translation performance over strong, even state-of-the-art, NMT baselines.
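The three fusion variants differ in where the backbone encoding is injected (encoder side, decoder side, or both). One plausible gating mechanism for the source-side case is sketched below; this is our own illustrative formulation, not necessarily the exact fusion used in the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def source_side_fusion(src, backbone, W, b):
    """Gated fusion of full-sentence states with pooled backbone states.

    src:      (n, d) contextual states of the full source sentence
    backbone: (m, d) states of the compressed (backbone) sequence
    W: (2d, d) and b: (d,) parameterize an illustrative fusion gate
    """
    pooled = backbone.mean(axis=0, keepdims=True)       # (1, d) pooled backbone
    pooled = np.repeat(pooled, src.shape[0], axis=0)    # broadcast to (n, d)
    gate = sigmoid(np.concatenate([src, pooled], axis=1) @ W + b)
    return gate * src + (1.0 - gate) * pooled           # (n, d) fused states

rng = np.random.default_rng(0)
n, m, d = 5, 3, 4
out = source_side_fusion(rng.normal(size=(n, d)), rng.normal(size=(m, d)),
                         rng.normal(size=(2 * d, d)), np.zeros(d))
```

A gate lets the model decide, per position and per dimension, how much backbone information to mix into the full-sentence representation, rather than simply concatenating the two encodings.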
We then propose three translation models, including backbone source-side fusion based NMT (BSFNMT), backbone target-side fusion (BTFNMT), and both-side fusion based NMT (BBFNMT), to introduce this backbone knowledge into the existing Transformer NMT system for improving translation predictions.
|
What are the three translation models they propose?
|
Backbone source-side fusion based NMT (BSFNMT), backbone target-side fusion (BTFNMT), and both-side fusion based NMT (BBFNMT).
|
null | false
| null |
Who invented the first car?
|
Karl Benz invented the first car in 1886. It was a petrol-powered vehicle designed to travel slowly. Because each car was made entirely by hand, supply was limited and the number of cars on the road did not surge. However, the first popular car was the Model T, built in 1908 by Ford in the USA. Ford sold over 15 million units of the Model T worldwide.
|
|
null | false
| null |
Explain the best way to swing a golf iron compared to a golf driver.
|
When hitting an iron, you want to make a downward striking motion into the grass. The driver calls for more of a sweeping motion through the ball. Overall, shortening the backswing with the driver will help you gain control.
|
|
null | false
| null |
Lightwater is a village in the Surrey Heath district of Surrey, England, about 27 miles (43 km) southwest of central London. Immediately surrounding towns and villages include Bagshot, Deepcut, Windlesham, Camberley, and West End, Woking. It is bounded to the north-west by the M3 motorway and to the north-east by the "Lightwater Bypass" (A322). To the south-east it is bounded by the "Red Road" (B311), while to the south-west there are vehicle-testing tracks owned by the Ministry of Defence.
The village is adjacent to junction 3 of the M3 motorway. The nearest railway station is at Bagshot, on the line between Ascot and Aldershot. Lightwater is part of the civil parish of Windlesham, which has a population of 17,000 and also includes the neighbouring village of Bagshot.
|
Which district is Lightwater in?
|
Lightwater is in Surrey Heath district in Surrey. It is in the civil parish of Windlesham.
|
null | false
| 106
|
This section presents experiments conducted on the annotated corpora introduced in section SECREF4 . We put the main focus on identifying argument components in the discourse. To comply with the machine learning terminology, in this section we will use the term domain as an equivalent to a topic (remember that our dataset includes six different topics; see section SECREF38 ).
We evaluate three different scenarios. First, we report ten-fold cross validation over a random ordering of the entire data set. Second, we deal with in-domain ten-fold cross validation for each of the six domains. Third, in order to evaluate the domain portability of our approach, we train the system on five domains and test on the remaining one for all six domains (which we report as cross-domain validation).
We evaluate three different scenarios.
|
How many scenarios did the authors evaluate?
|
three
|
null | false
| null |
Why is Brazilian jiu jitsu an excellent martial art?
|
Brazilian jiu jitsu (BJJ) is an excellent martial art for a number of reasons. BJJ is, first and foremost, a generally safe martial art. BJJ does not emphasize or focus on striking techniques like punching and kicking; rather, BJJ focuses on the use of leverage and body mechanics to control and subdue an opponent. BJJ is accessible to almost anyone regardless of age, sex, or fitness level, as the practitioner can determine how relaxed or aggressive to be while practicing the art. BJJ is an excellent form of exercise and promotes cardiovascular fitness, flexibility, strength, endurance, and an overall sense of well-being. Further, there are approaches to BJJ that resonate with casual hobbyists as well as those who seek international competition, and with all levels of students across that spectrum. BJJ is easily a life-long endeavor that is safe, engaging, and also an excellent complement to striking-based martial arts such as karate, kickboxing, and boxing.
|
|
null | false
| null |
Write an friendly, convincing e-mail to my landlord asking whether I can get a dog.
|
Dear Sir/Madam,
I hope this email finds you well.
I'm reaching out with a request regarding the property I rent from you. I'm interested in getting myself a dog and I noticed there wasn't a clause in our rental agreement specifying if this was allowed or not so I would like to consult you about that.
I work from home most days of the week and have a very laid-back lifestyle, so I'm looking for a dog that can keep me company but also get me out of the house for walks. I'm looking into a small breed that is also calm and friendly - likely a French Bulldog. They are easy to train and known for spending most of the day sleeping; this is also a very quiet breed with few complaints about barking.
Is that something you would permit?
I am also happy to hop on a call or meet in person to discuss further in case you have any follow-up questions.
Looking forward to hearing from you soon.
Kind regards
|
|
null | false
| null |
How is it that some high net-worth individuals are able to avoid paying taxes completely?
|
The Internal Revenue Service (IRS) taxes regular income and investment income differently. For one common type of investment income, i.e., long-term capital gains, the tax rates are 0%, 15% and 20% based on the annual income of the individual. The income used for calculating the above rates doesn't include income from the long-term capital gains itself. For example, if an individual makes less than $41,675 for the year 2022 through regular income, his or her capital gains tax rate would be 0% even if the long-term capital gains themselves exceed a million dollars. Assuming many of these high-net-worth individuals don't have any regular income and all their income is through long-term capital gains, they end up paying no taxes at all.
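The bracket logic described above can be sketched in a few lines. The $41,675 threshold comes from the text; the $459,750 upper threshold for the 15% bracket (2022, single filer) is an added assumption, and real tax law has many further wrinkles this ignores.

```python
def ltcg_rate(ordinary_income: float) -> float:
    """Long-term capital gains rate for a 2022 single filer, following the
    simplification in the text: the bracket depends on ordinary income
    alone, not on the size of the gains themselves.
    The $459,750 threshold is an assumption not stated in the text."""
    if ordinary_income <= 41_675:
        return 0.00
    if ordinary_income <= 459_750:
        return 0.15
    return 0.20

# No regular income, so even $1M of long-term gains is taxed at 0%:
print(ltcg_rate(0))        # 0.0
print(ltcg_rate(100_000))  # 0.15
```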
|
|
null | false
| null |
Commercial Hindi cinema grew in the 1980s, with films such as Ek Duuje Ke Liye (1981), Disco Dancer (1982), Himmatwala (1983), Tohfa (1984), Naam (1986), Mr India (1987), and Tezaab (1988).
In the late 1980s, Hindi cinema experienced another period of stagnation, with a decline in box office turnout, due to increasing violence, decline in musical melodic quality, and rise in video piracy, leading to middle-class family audiences abandoning theatres. The turning point came with Indian blockbuster Disco Dancer (1982) which began the era of disco music in Indian cinema. Lead actor Mithun Chakraborty and music director Bappi Lahiri had the highest number of mainstream Indian hit movies that decade. At the end of the decade, Yash Chopra's Chandni (1989) created a new formula for Bollywood musical romance films, reviving the genre and defining Hindi cinema in the years that followed. Commercial Hindi cinema grew in the late 1980s and 1990s, with the release of Mr. India (1987), Qayamat Se Qayamat Tak (1988), Chaalbaaz (1989), Maine Pyar Kiya (1989), Lamhe (1991), Saajan (1991), Khuda Gawah (1992), Khalnayak (1993), Darr (1993), Hum Aapke Hain Koun..! (1994), Dilwale Dulhaniya Le Jayenge (1995), Dil To Pagal Hai (1997), Pyar Kiya Toh Darna Kya (1998) and Kuch Kuch Hota Hai (1998). Cult classic Bandit Queen (1994) directed by Shekhar Kapur received international recognition and controversy.
|
From the below text extract the year and list all movies released in that year separated by a comma in the form {Year} - {all movies released in that year} and show this as a bullet list
|
Here are the years and the movies released in that year from the given text:
* 1981 - Ek Duuje Ke Liye
* 1982 - Disco Dancer
* 1983 - Himmatwala
* 1984 - Tohfa
* 1986 - Naam
* 1987 - Mr India
* 1988 - Tezaab, Qayamat Se Qayamat Tak
* 1989 - Chandni, Chaalbaaz, Maine Pyar Kiya
* 1991 - Lamhe, Saajan
* 1992 - Khuda Gawah
* 1993 - Khalnayak, Darr
* 1994 - Hum Aapke Hain Koun..!, Bandit Queen
* 1995 - Dilwale Dulhaniya Le Jayenge
* 1997 - Dil To Pagal Hai
* 1998 - Pyar Kiya Toh Darna Kya, Kuch Kuch Hota Hai
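The year-to-movies extraction shown above can be sketched with a single regular-expression pass over the text. The sample string is a shortened stand-in for the full passage, and the title pattern is an illustrative assumption.

```python
import re
from collections import defaultdict

# Shortened stand-in for the full passage:
text = ("films such as Ek Duuje Ke Liye (1981), Disco Dancer (1982), "
        "Himmatwala (1983) and Tezaab (1988).")

movies_by_year = defaultdict(list)
# Capture "Title (YYYY)" pairs; titles assumed to start with a capital
# letter and contain word characters, apostrophes, dots, hyphens, spaces.
for title, year in re.findall(r"([A-Z][\w'.\- ]+?)\s*\((\d{4})\)", text):
    movies_by_year[year].append(title.strip())

for year in sorted(movies_by_year):
    print(f"{year} - {', '.join(movies_by_year[year])}")
```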
|
null | false
| null |
Modern pacemakers usually have multiple functions. The most basic form monitors the heart's native electrical rhythm. When the pacemaker wire or "lead" does not detect heart electrical activity in the chamber – atrium or ventricle – within a normal beat-to-beat time period – most commonly one second – it will stimulate either the atrium or the ventricle with a short low voltage pulse. If it does sense electrical activity, it will hold off stimulating. This sensing and stimulating activity continues on a beat by beat basis and is called "demand pacing". In the case of a dual-chamber device, when the upper chambers have a spontaneous or stimulated activation, the device starts a countdown to ensure that in an acceptable – and programmable – interval, there is an activation of the ventricle, otherwise again an impulse will be delivered.
|
How do cardiac pacemakers work?
|
Pacemakers or cardiac pacemakers usually refer to artificial cardiac pacemakers, since our body has a natural cardiac pacemaker.
The artificial cardiac pacemakers work by detecting electric signals through the conductive leads that are attached to the inside of the heart walls, usually in the atria, and, in the absence of an electrical signal, generating one to stimulate the desired heart area, either an atrium or a ventricle.
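The "demand pacing" behavior described in the passage can be sketched as a small simulation: pace only when no native beat is sensed within the beat-to-beat interval, and otherwise hold off. The function, timings, and event list are illustrative, not a real device algorithm.

```python
def demand_pace(sense_events_s, duration_s, interval_s=1.0):
    """Simulate single-chamber demand pacing.

    sense_events_s: times (seconds) of sensed native electrical activity.
    Returns the times at which the device delivers a stimulus: whenever
    interval_s elapses with no sensed beat, pace once and reset the timer.
    """
    paces = []
    events = sorted(sense_events_s)
    last_event = 0.0
    i = 0
    t = 0.0
    while t < duration_s:
        next_sense = events[i] if i < len(events) else float("inf")
        deadline = last_event + interval_s
        if next_sense <= deadline and next_sense < duration_s:
            last_event = next_sense   # native beat sensed: hold off pacing
            i += 1
        elif deadline < duration_s:
            paces.append(deadline)    # no activity within interval: pace
            last_event = deadline
        else:
            break
        t = last_event
    return paces

# Native beats at 0.8s and 1.5s, then a pause -> the device takes over:
print(demand_pace([0.8, 1.5], duration_s=5.0))  # [2.5, 3.5, 4.5]
```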
|
null | false
| null |
The British monarchy traces its origins from the petty kingdoms of Anglo-Saxon England and early medieval Scotland, which consolidated into the kingdoms of England and Scotland by the 10th century. England was conquered by the Normans in 1066, after which Wales also gradually came under the control of Anglo-Normans. The process was completed in the 13th century when the Principality of Wales became a client state of the English kingdom. Meanwhile, Magna Carta began the process of reducing the English monarch's political powers. In the 16th century, English and Scottish monarchs played a central role in what became the religious English Reformation and Scottish Reformation. From 1603, the English and Scottish kingdoms were ruled by a single sovereign. From 1649 to 1660, the tradition of monarchy was broken by the republican Commonwealth of England, which followed the Wars of the Three Kingdoms. Following the installation of William III and Mary II as co-monarchs in the Glorious Revolution, the Bill of Rights 1689, and its Scottish counterpart the Claim of Right Act 1689, further curtailed the power of the monarchy and excluded Roman Catholics from succession to the throne. In 1707, the kingdoms of England and Scotland were merged to create the Kingdom of Great Britain, and in 1801, the Kingdom of Ireland joined to create the United Kingdom of Great Britain and Ireland. The British monarch was the nominal head of the vast British Empire, which covered a quarter of the world's land area at its greatest extent in 1921.
|
Without quoting directly from the text, briefly describe how the United Kingdom was formed from its constituent countries.
|
The United Kingdom is now made up of England, Wales, Scotland and Northern Ireland. These countries were finally united in 1801, after England and Scotland (which had merged in 1707) were joined by the Kingdom of Ireland.
|
null | false
| null |
Which is a species of fish? Blowfish or Toucan
|
Blowfish
|
|
2002.05058
| false
| null |
Evaluation of NLG models has been a long-standing open problem. While human evaluation may be ideal, it is generally expensive to conduct and does not scale well. Various automated evaluation approaches are proposed to facilitate the development and evaluation of NLG models. We summarize these evaluation approaches below.
Text Overlap Metrics, including BLEU BIBREF5, METEOR BIBREF6 and ROUGE BIBREF7, are the most popular metrics employed in the evaluation of NLG models. They evaluate generated text by comparing the similarity between the generated text and human written references. While this works well in tasks where the diversity of acceptable output is limited, such as machine translation and text summarization, text overlap metrics are shown to have weak or no correlation with human judgments in open domain natural language generation tasks BIBREF8. There are two major drawbacks in these metrics. First, text overlap metrics can not distinguish minor variations in a generated text which may make the sentence not equally grammatically correct or semantically meaningful. Second, there may exist multiple equally good outputs for the given input and comparing against one gold reference can be erroneous.
Perplexity is commonly used to evaluate the quality of a language model. It measures how well a probability distribution predicts a sample and captures the degree of uncertainty in the model. It is used to evaluate models in open-domain NLG tasks such as story generation BIBREF2 and open domain dialogue systems. However, “how likely a sentence is generated by a given model” may not be comparable across different models and does not indicate the quality of the sentence.
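Perplexity can be illustrated with a short sketch: it is the exponential of the average negative log-probability the model assigns to each token. The token probabilities below are made-up values, not from any model mentioned here.

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the mean negative log-probability per token.
    Lower is better: a model that is certain of every token scores 1."""
    n = len(token_probs)
    nll = -sum(math.log(p) for p in token_probs) / n
    return math.exp(nll)

# A model that assigns every token probability 0.25 has perplexity ~4,
# i.e., it is as uncertain as a uniform choice among 4 tokens:
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # ≈ 4.0
```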
Parameterized Metrics learn a parameterized model to evaluate generated text. Adversarial evaluation models BIBREF11, BIBREF12 assigns a score based on how easy it is to distinguish the dialogue model responses from human responses. However, training such a discriminator can be difficult as the binary classification task can be easily over-fitted and leads to poor generalizability BIBREF11. Moreover, the information we get from the discriminator accuracy is limited as we can not compare the quality of two generated sentences when they both succeed or fail in fooling the discriminator. Recent study shows that the discriminator accuracy does not correlate well with human preference BIBREF13. Automated Dialogue Evaluation Model (ADEM) BIBREF14 is another parameterized metric proposed for dialogue system evaluation. It learns to score a generated dialogue response based on the context and the human written reference. However, it requires human-annotated scores for generated sentences. It is generally hard to design appropriate questions for crowdsourcing these scores, which makes the annotation very expensive to get and the inter-annotator agreement score is only moderate BIBREF14. As a result, the training data is limited and noisy, which makes the scoring task even harder. It can be problematic when comparing models with similar quality. In addition, this model is designed only for evaluating dialogue response generation models. More recently, embedding similarity based metrics such as HUSE BIBREF15 and BERTScore BIBREF16. These metrics alleviate the first problem of text overlap metrics by modeling semantic similarity better. However, they can not address the response diversity problem and thus are only suitable for machine translation and text summarization.
Various automated evaluation approaches are proposed to facilitate the development and evaluation of NLG models. We summarize these evaluation approaches below.
Text Overlap Metrics, including BLEU BIBREF5, METEOR BIBREF6 and ROUGE BIBREF7, are the most popular metrics employed in the evaluation of NLG models.
Perplexity is commonly used to evaluate the quality of a language model.
Parameterized Metrics learn a parameterized model to evaluate generated text.
|
What previous automated evalution approaches authors mention?
|
The answers are shown as follows:
* Text Overlap Metrics, including BLEU
* Perplexity
* Parameterized Metrics
|
null | false
| null |
The use of speed limits predates both motorized vehicles, and enforcement of the laws. Facing the invention of the automobile, many nations enacted speed limit laws, and appropriate measures to enforce them. The Locomotive Acts in the UK set speed limits for vehicles, and later codified enforcement methods. The first Locomotive Act, passed in 1861, set a speed limit of 10 miles per hour (16 km/h) in uninhabited areas, and 5 miles per hour (8.0 km/h) within towns. This act also included the value of fines for violations of the law.
|
Given this paragraph about speed limits, what was first law regulating speed limits in UK, when was it enacted and what speed limits it set.
|
The first law regulating speed limits in the UK was the Locomotive Act, passed in 1861, which set a speed limit of 10 miles per hour (16 km/h) in uninhabited areas and 5 miles per hour (8.0 km/h) within towns.
|
null | false
| null |
What is an interchange fee?
|
Interchange fee is a term used in the payment card industry to describe a fee paid between banks for the acceptance of card-based transactions. Usually for sales/services transactions it is a fee that a merchant's bank (the "acquiring bank") pays a customer's bank (the "issuing bank").
In a credit card or debit card transaction, the card-issuing bank in a payment transaction deducts the interchange fee from the amount it pays the acquiring bank that handles a credit or debit card transaction for a merchant. The acquiring bank then pays the merchant the amount of the transaction minus both the interchange fee and an additional, usually smaller, fee for the acquiring bank or independent sales organization (ISO), which is often referred to as a discount rate, an add-on rate, or passthru. For cash withdrawal transactions at ATMs, however, the fees are paid by the card-issuing bank to the acquiring bank (for the maintenance of the machine).
|
|
null | false
| null |
The Indianapolis 500, formally known as the Indianapolis 500-Mile Race, and commonly called the Indy 500, is an annual automobile race held at Indianapolis Motor Speedway (IMS) in Speedway, Indiana, United States, an enclave suburb of Indianapolis. The event is traditionally held over Memorial Day weekend, usually the last weekend of May. It is contested as part of the IndyCar Series, the top level of American open-wheel car racing, a formula colloquially known as "Indy car racing". The track itself is nicknamed the "Brickyard", as the racing surface was paved in brick in the fall of 1909. One yard of brick remains exposed at the start/finish line. The event, billed as The Greatest Spectacle in Racing, is considered part of the Triple Crown of Motorsport along with the 24 Hours of Le Mans and the Monaco Grand Prix, with which it typically shares a date.
|
What is the Indianapolis 500?
|
The Indianapolis 500, formally known as the Indianapolis 500-Mile Race, and commonly called the Indy 500, is an annual automobile race held at Indianapolis Motor Speedway (IMS) in Speedway, Indiana, United States, an enclave suburb of Indianapolis. The event is traditionally held over Memorial Day weekend, usually the last weekend of May. It is contested as part of the IndyCar Series, the top level of American open-wheel car racing, a formula colloquially known as "Indy car racing". The track itself is nicknamed the "Brickyard", as the racing surface was paved in brick in the fall of 1909. One yard of brick remains exposed at the start/finish line. The event, billed as The Greatest Spectacle in Racing, is considered part of the Triple Crown of Motorsport along with the 24 Hours of Le Mans and the Monaco Grand Prix, with which it typically shares a date.
The inaugural race was held in 1911 and was won by Ray Harroun. The event celebrated its 100th anniversary in 2011, and the 100th running was held in 2016. The event was put on hiatus twice, from 1917 to 1918 due to World War I and from 1942 to 1945 due to World War II. In two different periods, the race was part of FIA World Championships; between 1925 and 1928, the World Manufacturers' Championship and between 1950 and 1960, the World Drivers' Championship.
Marcus Ericsson is the current champion. The most successful drivers are A. J. Foyt, Al Unser Sr., Rick Mears and Hélio Castroneves, each of whom has won the race four times. The active driver with the most victories is Hélio Castroneves. Rick Mears holds the record for most career pole positions with six. The most successful car owner is Roger Penske, owner of the Indianapolis Motor Speedway and Team Penske, which has 18 total wins and 18 poles. Penske also has five wins at the IndyCar Grand Prix, held on the combined road course.
The event is steeped in tradition, in pre-race ceremonies, post-race celebrations, and race procedure. The most noteworthy and most popular traditions are the 33-car field lining up three-wide for the start, the annual singing of the chorus of "Back Home Again in Indiana," and the victory lane bottle of milk. Also unique is that qualifying requires the driver to complete four, rather than one, timed laps. Qualifying has a separate weekend. The official attendance is not disclosed by Speedway management, but the permanent seating capacity is upwards of 250,000, and infield patrons raise the race-day attendance to approximately 300,000.
|
null | false
| null |
A post office called Glasgow was established in 1853, and remained in operation until 1905.The community was named after Glasgow, in Scotland, the ancestral home of a large share of the first settlers.
|
What is the significance of Glasgow, Georgia?
|
It is a community named after Glasgow in Scotland which is the ancestral home of a large part of the first settlers.
|
null | false
| null |
Help me understand how I would grow a garden in my backyard
|
Growing a garden in your backyard requires year-round dedication. Gardens can be composed of a variety of different types of plants and trees, depending on the size of the garden. Some plants are seasonal and must be re-planted every year; you will usually need to plant them in the winter or spring. Then, you will need to water those plants regularly so that they bloom. Other plants and trees can be planted once and continue to grow year-round. These kinds of trees need only be planted once and cared for regularly with water and fertilizer.
|
|
null | false
| null |
What does Lady Gaga affectionately call her fans?
|
Little monsters.
|
|
1602.03661
| false
| null |
In this paper we have investigated duality of patterning at the lexicon level. We have quantified in particular the notions of combinatoriality and compositionality as observed in real languages as well as in a large-scale dataset produced in the framework of a web-based word association experiment BIBREF1 . We have paralleled this empirical analysis with a modeling scheme, the Blending Game, whose aim is that of identifying the main determinants for the emergence of duality of patterning in language. We analyzed the main properties of the lexicon emerged from the Blending Game as a function of the two parameters of the model, the graph connectivity $p_{link}$ and the memory scale $\tau $ . We found that properties of the emerging lexicon related to the combinatoriality, namely the words length distribution, the frequency of use of the different forms and a measure for the combinatoriality itself, reflect both qualitatively and quantitatively the corresponding properties as measured in human languages, provided that the memory parameter $\tau $ is sufficiently high, that is that a sufficiently high effort is required in order to understand and learn brand new forms. Conversely, the compositional properties of the lexicon are related to the parameter $p_{link}$ , that is a measure of the level of structure of the conceptual graph. For intermediate and low values of $p_{link}$ , semantic relations between objects are more differentiated with respect to the situation of a more dense graph, in which every object is related to anyone else, and compositionality is enhanced. In summary, while the graph connectivity strongly affects the compositionality of the lexicon, noise in communication strongly affects the combinatoriality of the lexicon.
We found that properties of the emerging lexicon related to the combinatoriality, namely the words length distribution, the frequency of use of the different forms and a measure for the combinatoriality itself, reflect both qualitatively and quantitatively the corresponding properties as measured in human languages, provided that the memory parameter $\tau $ is sufficiently high, that is that a sufficiently high effort is required in order to understand and learn brand new forms.
|
What empirical data are the Blending Game predictions compared to?
|
The answers are shown as follows:
* words length distribution, the frequency of use of the different forms and a measure for the combinatoriality
|
null | false
| null |
What are the five MLB teams with the most World Series wins in descending order?
|
New York Yankees, St. Louis Cardinals, Oakland Athletics, Boston Red Sox, San Francisco Giants
|
|
null | false
| null |
How Can I become a Referee?
|
To become a Football Association (FA) referee you need to be at least 14-years-old and a resident in England. Contact your County FA for further information.
Note: You would need to attend a Basic Referees Course with your local County FA to train as a qualified referee.
|
|
null | false
| null |
Which of the big five is the most dangerous animals in Africa ?
|
The African buffalo is notorious for killing the largest number of hunters.
|
|
null | false
| null |
What in business terms is the IMF
|
International Monetary Fund
|
|
1910.06748
| false
| null |
FLOAT SELECTED: Table 2. Twitter corpus distribution by language label.
We begin by filtering the corpus to keep only those tweets where the user's self-declared language and the tweet's detected language correspond; that language becomes the tweet's correct language label. This operation cuts out roughly half the tweets, and leaves us with a corpus of about 900 million tweets in 54 different languages. Table TABREF6 shows the distribution of languages in that corpus. Unsurprisingly, it is a very imbalanced distribution of languages, with English and Japanese together accounting for 60% of all tweets. This is consistent with other studies and statistics of language use on Twitter, going as far back as 2013. It does however make it very difficult to use this corpus to train a LID system for other languages, especially for one of the dozens of seldom-used languages. This was our motivation for creating a balanced Twitter dataset.
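The filtering step described above can be sketched as follows: keep only tweets whose self-declared and detected languages agree, and use that language as the label. The field names (`user_lang`, `detected_lang`) are illustrative assumptions, not the paper's actual schema.

```python
# Toy stand-in for the tweet corpus; field names are assumptions.
tweets = [
    {"text": "hello world", "user_lang": "en", "detected_lang": "en"},
    {"text": "bonjour",     "user_lang": "en", "detected_lang": "fr"},
]

# Keep a tweet only when the user's self-declared language matches the
# detected language; that language becomes the tweet's label.
labeled = [
    {**t, "label": t["user_lang"]}
    for t in tweets
    if t["user_lang"] == t["detected_lang"]
]
print(len(labeled), labeled[0]["label"])  # 1 en
```

On the real corpus this check discards roughly half the tweets, per the text.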
FLOAT SELECTED: Table 2. Twitter corpus distribution by language label.
We begin by filtering the corpus to keep only those tweets where the user's self-declared language and the tweet's detected language correspond; that language becomes the tweet's correct language label. This operation cuts out roughly half the tweets, and leaves us with a corpus of about 900 million tweets in 54 different languages. Table TABREF6 shows the distribution of languages in that corpus.
|
What languages are represented in the dataset?
|
EN, JA, ES, AR, PT, KO, TH, FR, TR, RU, IT, DE, PL, NL, EL, SV, FA, VI, FI, CS, UK, HI, DA, HU, NO, RO, SR, LV, BG, UR, TA, MR, BN, IN, KN, ET, SL, GU, CY, ZH, CKB, IS, LT, ML, SI, IW, NE, KM, MY, TL, KA, BO
|
null | false
| 164
|
The variables (both predictors and outcomes) are rarely simply binary or categorical. For example, a study on language use and age could focus on chronological age (instead of, e.g., social age BIBREF19 ). However, even then, age can be modeled in different ways. Discretization can make the modeling easier and various NLP studies have modeled age as a categorical variable BIBREF20 . But any discretization raises questions: How many categories? Where to place the boundaries? Fine distinctions might not always be meaningful for the analysis we are interested in, but categories that are too broad can threaten validity. Other interesting variables include time, space, and even the social network position of the author. It is often preferable to keep the variable in its most precise form. For example, BIBREF21 perform exploration in the context of hypothesis testing by using latitude and longitude coordinates — the original metadata attached to geotagged social media such as tweets — rather than aggregating into administrative units such as counties or cities. This is necessary when such administrative units are unlikely to be related to the target concept, as is the case in their analysis of dialect differences. Focusing on precise geographical coordinates also makes it possible to recognize fine-grained effects, such as language variation across the geography of a city.
Using a particular classification scheme means deciding which variations are visible, and which ones are hidden BIBREF22 . We are looking for a categorization scheme for which it is feasible to collect a large enough labeled document collection (e.g., to train supervised models), but which is also fine-grained enough for our purposes. Classification schemes rarely exhibit the ideal properties, i.e., that they are consistent, their categories are mutually exclusive, and that the system is complete BIBREF22 . Borderline cases are challenging, especially with social and cultural concepts, where the boundaries are often not clear-cut. The choice of scheme can also have ethical implications BIBREF22 . For example, gender is usually represented as a binary variable in NLP and computational models tend to learn gender-stereotypical patterns. The operationalization of gender in NLP has been challenged only recently BIBREF23 , BIBREF24 , BIBREF25 .
Supervised and unsupervised learning are the most common approaches to learning from data. With supervised learning, a model learns from labeled data (e.g., social media messages labeled by sentiment) to infer (or predict) these labels from unlabeled texts. In contrast, unsupervised learning uses unlabeled data. Supervised approaches are especially suitable when we have a clear definition of the concept of interest and when labels are available (either annotated or native to the data). Unsupervised approaches, such as topic models, are especially useful for exploration. In this setting, conceptualization and operationalization may occur simultaneously, with theory emerging from the data BIBREF26 . Unsupervised approaches are also used when there is a clear way of measuring a concept, often based on strong assumptions. For example, BIBREF3 measure “surprise” in an analysis of Darwin's reading decisions based on the divergence between two probability distributions.
From an analysis perspective, the unit of text that we are labeling (or annotating, or coding), either automatic or manual, can sometimes be different than one's final unit of analysis. For example, if in a study on media frames in news stories, the theoretical framework and research question point toward frames at the story level (e.g., what is the overall causal analysis of the news article?), the story must be the unit of analysis. Yet it is often difficult to validly and reliably code a single frame at the story level. Multiple perspectives are likely to sit side-by-side in a story. Thus, an article on income inequality might point to multiple causes, such as globalization, education, and tax policies. Coding at the sentence level would detect each of these causal explanations individually, but this information would need to be somehow aggregated to determine the overall story-level frame. Sometimes scholars solve this problem by only examining headlines and lead paragraphs, arguing that based on journalistic convention, the most important information can be found at the beginning of a story. However, this leads to a return to a shorter, less nuanced analysis.
From a computational perspective, the unit of text can also make a huge difference, especially when we are using bag-of-words models, where word order within a unit does not matter. Small segments, like tweets, sometimes do not have enough information to make their semantic context clear. In contrast, larger segments, like novels, have too much variation, making it difficult to train focused models. Finding a good segmentation sometimes means combining short documents and subdividing long documents. The word “document" can therefore be misleading. But it is so ingrained in the common NLP lexicon that we use it anyway in this article.
For insight-driven text analysis, it is often critical that high-level patterns can be communicated. Furthermore, interpretable models make it easier to find spurious features, to do error analysis, and to support interpretation of results. Some approaches are effective for prediction, but harder to interpret. The value we place on interpretability can therefore influence the approach we choose. There is an increasing interest in developing interpretable or transparent models in the NLP and machine learning communities.
Small segments, like tweets, sometimes do not have enough information to make their semantic context clear. In contrast, larger segments, like novels, have too much variation, making it difficult to train focused models.
|
From a computational perspective, what are the flaws of small segments and larger segments to be analyzed in the model?
|
Small segments, like tweets, sometimes do not have enough information to make their semantic context clear. In contrast, larger segments, like novels, have too much variation, making it difficult to train focused models.
|
null | false
| 83
|
Disclaimer: Due to the nature of the paper, some examples contain highly offensive language and hate speech. They don't reflect the views of the authors in any way, and the point of the paper is to help fight such speech. Much recent interest has focused on the detection of offensive language and hate speech in online social media. Such language is often associated with undesirable online behaviors such as trolling, cyberbullying, online extremism, political polarization, and propaganda. Thus, offensive language detection is instrumental for a variety of applications such as: quantifying polarization BIBREF0, BIBREF1, troll and propaganda account detection BIBREF2, detecting the likelihood of hate crimes BIBREF3, and predicting conflict BIBREF4. In this paper, we describe our methodology for building a large dataset of Arabic offensive tweets. Given that roughly 1-2% of all Arabic tweets are offensive BIBREF5, targeted annotation is essential for efficiently building a large dataset. Since our methodology does not use a seed list of offensive words, it is not biased by topic, target, or dialect. Using our methodology, we tagged a dataset of 10,000 Arabic tweets for offensiveness, where offensive tweets account for roughly 19% of the tweets. Further, we labeled tweets as vulgar or hate speech. To date, this is the largest available dataset, which we plan to make publicly available along with annotation guidelines. We use this dataset to characterize Arabic offensive language to ascertain the topics, dialects, and users' gender that are most associated with the use of offensive language. Though we suspect that there are common features that span different languages and cultures, some characteristics of Arabic offensive language are language- and culture-specific. Thus, we conduct a thorough analysis of how Arabic users use offensive language.
Next, we use the dataset to train strong Arabic offensive language classifiers using state-of-the-art representations and classification techniques. Specifically, we experiment with static and contextualized embeddings for representation along with a variety of classifiers such as a deep neural network classifier and Support Vector Machine (SVM).
The contributions of this paper are as follows:
We built the largest Arabic offensive language dataset to date that includes special tags for vulgar language and hate speech. We describe the methodology for building it along with annotation guidelines.
We performed thorough analysis of the dataset and described the peculiarities of Arabic offensive language.
We experimented with Support Vector Machine classifiers on character and word ngrams classification techniques to provide strong results on Arabic offensive language classification.
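Character and word n-gram features of the kind fed to such an SVM can be extracted with a short sketch like this (illustrative only, not the authors' exact pipeline):

```python
def char_ngrams(text, n):
    """All overlapping character n-grams of a string."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def word_ngrams(tokens, n):
    """All overlapping word n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

print(char_ngrams("abcd", 2))           # ['ab', 'bc', 'cd']
print(word_ngrams("a b c".split(), 2))  # [('a', 'b'), ('b', 'c')]
```

These n-gram lists would then be counted (or hashed) into feature vectors before training the classifier.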
We experimented with Support Vector Machine classifiers on character and word ngrams classification techniques to provide strong results on Arabic offensive language classification.
|
What classifiers did the authors use when doing the experiment?
|
Support Vector Machine classifiers
|
null | false
| null |
What country most recently joined NATO?
|
Finland is the most recent country to join NATO, formally becoming a member on 4 April 2023. It is the 31st country to join the North Atlantic Treaty Organization.
|
|
1912.00582
| false
| null |
An LM's overall performance on *X can be measured simply by taking the proportion of correct predictions across the 67,000 minimal pairs from all paradigms. GPT-2 achieves the highest score and the $n$-gram the lowest. Transformer-XL and the LSTM LM perform in the middle, and at roughly the same level as each other. All models perform well below estimated human agreement (as described in Section SECREF11). The $n$-gram model's poor overall performance confirms *X is not solvable from co-occurrence information alone. Rather, success at *X is driven by the more abstract features learned by neural networks. There are no categories in which the $n$-gram approaches human performance.
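The overall score described above — the proportion of minimal pairs where the model prefers the acceptable sentence — can be sketched as follows; the toy scorer and sentence pair are invented for illustration:

```python
def minimal_pair_accuracy(pairs, score):
    """Fraction of (acceptable, unacceptable) sentence pairs where the model
    assigns a strictly higher score to the acceptable sentence."""
    correct = sum(1 for good, bad in pairs if score(good) > score(bad))
    return correct / len(pairs)

# Toy log-probability table standing in for a real language model.
scores = {"the dogs bark": -1.0, "the dogs barks": -3.0}
pairs = [("the dogs bark", "the dogs barks")]
print(minimal_pair_accuracy(pairs, scores.get))  # 1.0
```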
We report the 12-category accuracy results for all models and human evaluation in Table TABREF14.
GPT-2 achieves the highest score and the $n$-gram the lowest. Transformer-XL and the LSTM LM perform in the middle, and at roughly the same level as each other.
We report the 12-category accuracy results for all models and human evaluation in Table TABREF14.
|
What is the performance of the models on the tasks?
|
Overall accuracy per model is: 5-gram (60.5), LSTM (68.9), TXL (68.7), GPT-2 (80.1)
|
null | false
| null |
Why was the 2023 Women's NCAA basketball championship controversial?
|
LSU beat Iowa to win the NCAA women's basketball championship in a slightly controversial game. Many fans and pundits observed overly tight officiating, which left many of the key stars on both teams with quick fouls and time on the bench, altering the flow of the game and requiring more bench players to play than expected. Particularly impactful was a questionable technical foul on Iowa player Caitlin Clark in the second half. Additionally, near the end of the game, LSU star Angel Reese mocked Iowa star Clark as her team won, mimicking the championship ring soon to be placed on her hand. The 'trash talk' drove discussion of sportsmanship online among fans and journalists.
|
|
null | false
| 78
|
Social media platforms have made the spreading of fake news easier and faster, and able to reach a wider audience. Social media offer another feature, which is anonymity for authors, and this opens the door for many suspicious individuals or organizations to utilize these platforms. Recently, there has been an increased amount of fake news and rumors spreading over the web and social media BIBREF0. Fake news in social media varies in the intention to mislead. Some of this news is spread with the intention to be ironic or to deliver the news in an ironic way (satirical news). Other kinds, such as propaganda, hoaxes, and clickbaits, are spread to mislead the audience or to manipulate their opinions. In the case of Twitter, suspicious news annotations should be done at a tweet rather than an account level, since some accounts mix fake with real news. However, these annotations are extremely costly and time-consuming due to the high volume of available tweets. Consequently, a first step in this direction, e.g., as a pre-filtering step, can be viewed as the task of detecting fake news at the account level.
The main obstacle for detecting suspicious Twitter accounts is due to the behavior of mixing some real news with the misleading ones. Consequently, we investigate ways to detect suspicious accounts by considering their tweets in groups (chunks). Our hypothesis is that suspicious accounts have a unique pattern in posting tweet sequences. Since their intention is to mislead, the way they transition from one set of tweets to the next has a hidden signature, biased by their intentions. Therefore, reading these tweets in chunks has the potential to improve the detection of the fake news accounts.
In this work, we investigate the problem of discriminating between factual and non-factual accounts in Twitter. To this end, we collect a large dataset of tweets using a list of propaganda, hoax and clickbait accounts and compare different versions of sequential chunk-based approaches using a variety of feature sets against several baselines. Several approaches have been proposed for news verification, whether in social media (rumors detection) BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, or in news claims BIBREF5, BIBREF6, BIBREF7, BIBREF8. The main orientation in the previous works is to verify the textual claims/tweets but not their sources. To the best of our knowledge, this is the first work aiming to detect factuality at the account level, and especially from a textual perspective. Our contributions are:
[leftmargin=4mm]
We propose an approach to detect non-factual Twitter accounts by treating post streams as a sequence of tweets' chunks. We test several semantic and dictionary-based features together with a neural sequential approach, and apply an ablation test to investigate their contribution.
We benchmark our approach against other approaches that discard the chronological order of the tweets or read the tweets individually. The results show that our approach produces superior results at detecting non-factual accounts.
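A minimal sketch of the chunking idea above — reading an account's chronologically ordered tweets as consecutive groups so that transition patterns between chunks are preserved — might look like this (function name and chunk size are illustrative):

```python
def chunk_stream(tweets, size):
    """Split a chronologically ordered tweet stream into consecutive
    fixed-size chunks, keeping the original posting order."""
    return [tweets[i:i + size] for i in range(0, len(tweets), size)]

timeline = ["t1", "t2", "t3", "t4", "t5"]
print(chunk_stream(timeline, 2))  # [['t1', 't2'], ['t3', 't4'], ['t5']]
```

Each chunk would then be featurized and fed to a sequential model in order, rather than treating tweets as an unordered bag.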
Social media platforms have made the spreading of fake news easier and faster, and able to reach a wider audience. Social media offer another feature, which is anonymity for authors, and this opens the door for many suspicious individuals or organizations to utilize these platforms. Recently, there has been an increased amount of fake news and rumors spreading over the web and social media BIBREF0. Fake news in social media varies in the intention to mislead. Some of this news is spread with the intention to be ironic or to deliver the news in an ironic way (satirical news). Other kinds, such as propaganda, hoaxes, and clickbaits, are spread to mislead the audience or to manipulate their opinions. In the case of Twitter, suspicious news annotations should be done at a tweet rather than an account level, since some accounts mix fake with real news. However, these annotations are extremely costly and time-consuming due to the high volume of available tweets. Consequently, a first step in this direction, e.g., as a pre-filtering step, can be viewed as the task of detecting fake news at the account level.
The main obstacle for detecting suspicious Twitter accounts is due to the behavior of mixing some real news with the misleading ones. Consequently, we investigate ways to detect suspicious accounts by considering their tweets in groups (chunks). Our hypothesis is that suspicious accounts have a unique pattern in posting tweet sequences. Since their intention is to mislead, the way they transition from one set of tweets to the next has a hidden signature, biased by their intentions. Therefore, reading these tweets in chunks has the potential to improve the detection of the fake news accounts.
In this work, we investigate the problem of discriminating between factual and non-factual accounts in Twitter. To this end, we collect a large dataset of tweets using a list of propaganda, hoax and clickbait accounts and compare different versions of sequential chunk-based approaches using a variety of feature sets against several baselines. Several approaches have been proposed for news verification, whether in social media (rumors detection) BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, or in news claims BIBREF5, BIBREF6, BIBREF7, BIBREF8. The main orientation in the previous works is to verify the textual claims/tweets but not their sources. To the best of our knowledge, this is the first work aiming to detect factuality at the account level, and especially from a textual perspective. Our contributions are:
[leftmargin=4mm]
We propose an approach to detect non-factual Twitter accounts by treating post streams as a sequence of tweets' chunks. We test several semantic and dictionary-based features together with a neural sequential approach, and apply an ablation test to investigate their contribution.
We benchmark our approach against other approaches that discard the chronological order of the tweets or read the tweets individually. The results show that our approach produces superior results at detecting non-factual accounts.
Consequently, we investigate ways to detect suspicious accounts by considering their tweets in groups (chunks). In this work, we investigate the problem of discriminating between factual and non-factual accounts in Twitter.
|
How do the authors investigate the problem?
|
By considering tweets in groups (chunks).
|
null | false
| null |
What are some of the most common social norms in the United States?
|
While social norms can vary between regions, some of the most common social norms to most regions in the United States include:
- Greeting people when meeting
- Saying please and thank you
- Holding the door open for someone
- Speaking quietly in a library or other quiet space
- Tipping servers at restaurants and bars
|
|
null | false
| 376
|
There has been a surge of recent interest in detecting human actions in videos. Work in this space has mainly focused on learning actions from articulated human pose BIBREF7 , BIBREF8 , BIBREF9 or mining spatial and temporal information from videos BIBREF10 , BIBREF11 . A number of resources have been produced, including Action Bank BIBREF12 , NTU RGB+D BIBREF13 , SBU Kinect Interaction BIBREF14 , and PKU-MMD BIBREF15 .
Most research on video action detection has gathered video information for a set of pre-defined actions BIBREF2 , BIBREF16 , BIBREF1 , an approach known as explicit data gathering BIBREF0 . For instance, given an action such as “open door,” a system would identify videos that include a visual depiction of this action. While this approach is able to detect a specific set of actions, whose choice may be guided by downstream applications, it achieves high precision at the cost of low recall. In many cases, the set of predefined actions is small (e.g., 203 activity classes in BIBREF2 ), and for some actions, the number of visual depictions is very small.
An alternative approach is to start with a set of videos, and identify all the actions present in these videos BIBREF17 , BIBREF18 . This approach has been referred to as implicit data gathering, and it typically leads to the identification of a larger number of actions, possibly with a small number of examples per action.
In this paper, we use an implicit data gathering approach to label human activities in videos. To the best of our knowledge, we are the first to explore video action recognition using both transcribed audio and video information. We focus on the popular genre of lifestyle vlogs, which consist of videos of people demonstrating routine actions while verbally describing them. We use these videos to develop methods to identify if actions are visually present.
The paper makes three main contributions. First, we introduce a novel dataset consisting of 1,268 short video clips paired with sets of actions mentioned in the video transcripts, as well as manual annotations of whether the actions are visible or not. The dataset includes a total of 14,769 actions, 4,340 of which are visible. Second, we propose a set of strong baselines to determine whether an action is visible or not. Third, we introduce a multimodal neural architecture that combines information drawn from visual and linguistic clues, and show that it improves over models that rely on one modality at a time.
By making progress towards automatic action recognition, in addition to contributing to video understanding, this work has a number of important and exciting applications, including sports analytics BIBREF19 , human-computer interaction BIBREF20 , and automatic analysis of surveillance video footage BIBREF21 .
The paper is organized as follows. We begin by discussing related work, then describe our data collection and annotation process. We next overview our experimental set-up and introduce a multimodal method for identifying visible actions in videos. Finally, we discuss our results and conclude with general directions for future work.
First, we introduce a novel dataset consisting of 1,268 short video clips paired with sets of actions mentioned in the video transcripts, as well as manual annotations of whether the actions are visible or not.
|
How many short videos are clipped in their novel dataset?
|
1,268 short video clips.
|
null | false
| null |
Give me a list of the greatest chess players of all time
|
The greatest chess players of all time include Garry Kasparov, Magnus Carlsen, and Bobby Fischer.
|
|
null | false
| null |
The Tampa Convention Center is a mid-sized convention center located in downtown Tampa, Florida at the mouth of the Hillsborough River. It has both waterfront views of Tampa Bay and views of the city's skyline. Harbour Island is across the eponymous bridge on the other side of the Garrison Channel. The center is connected to the neighboring Channelside District and Ybor City via the TECO Line Streetcar, which has a station across the street. The center opened in 1990 and encompasses 600,000 square feet (56,000 m2) in total. It has a 200,000 square feet (19,000 m2) exhibit hall, a ballroom capable of accommodating over 2,000 guests, and 36 meeting rooms that can be adjusted to various sizes. The facility hosts over 300 events per year.
|
Given a reference text about the Tampa Convention Center, when did it open and how big is it?
|
The Tampa Convention Center opened in 1990 and is about 600,000 square feet.
|
null | false
| null |
What religion is the oldest in the world?
|
The oldest religion in the world is Hinduism. Also known as Sanatana Dharma, Hinduism is one of the major religions in the modern world with over a billion followers. Hinduism contains scriptures dating back 3,000 years ago, and roots and customs dating back over 4,000 years ago. As the world's third largest religion, Hinduism is referred to as a natural religion because of its universal applicability and ability to be understood through study, reason, and experience.
|
|
1706.08032
| true
| null |
Stanford - Twitter Sentiment Corpus (STS Corpus): STS Corpus contains 1,600K training tweets collected by a crawler from BIBREF0 . BIBREF0 constructed a test set manually with 177 negative and 182 positive tweets. The Stanford test set is small. However, it has been widely used in different evaluation tasks BIBREF0 BIBREF5 BIBREF13 .
Sanders - Twitter Sentiment Corpus: This dataset consists of hand-classified tweets collected by using search terms: INLINEFORM0, #google, #microsoft and #twitter. We construct the dataset as BIBREF14 for binary classification.
Health Care Reform (HCR): This dataset was constructed by crawling tweets containing the hashtag #hcr BIBREF15 . Task is to predict positive/negative tweets BIBREF14 .
Table IV shows the results of our model for sentiment classification against other models. We compare our model's performance with the approaches of BIBREF0 BIBREF5 on the STS Corpus. BIBREF0 reported the results of Maximum Entropy (MaxEnt), NB, and SVM on the STS Corpus, which had good performance at the time. The model of BIBREF5 is the state of the art so far, using a CharSCNN. As can be seen, 86.63 is the best prediction accuracy of our model so far for the STS Corpus.
For the Sanders and HCR datasets, we compare results with the model of BIBREF14, which used an ensemble of multiple base classifiers (ENS) such as NB, Random Forest (RF), SVM, and Logistic Regression (LR). The ENS model is combined with bag-of-words (BoW), feature hashing (FH), and lexicons. The model of BIBREF14 is the state of the art on the Sanders and HCR datasets. Our models outperform the model of BIBREF14 on both the Sanders and HCR datasets.
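A majority-vote combiner of the kind used in such an ensemble (ENS) can be sketched as follows; this is a generic illustration, not BIBREF14's exact combination rule:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine base-classifier predictions for a single instance by majority
    vote; ties are broken in favor of the first-seen label."""
    return Counter(predictions).most_common(1)[0][0]

# Three base classifiers (e.g., NB, SVM, LR) vote on one tweet.
print(majority_vote(["pos", "neg", "pos"]))  # pos
```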
Stanford - Twitter Sentiment Corpus (STS Corpus): STS Corpus contains 1,600K training tweets collected by a crawler from BIBREF0 . BIBREF0 constructed a test set manually with 177 negative and 182 positive tweets. The Stanford test set is small. However, it has been widely used in different evaluation tasks BIBREF0 BIBREF5 BIBREF13 .
Sanders - Twitter Sentiment Corpus: This dataset consists of hand-classified tweets collected by using search terms: INLINEFORM0, #google, #microsoft and #twitter. We construct the dataset as BIBREF14 for binary classification.
Health Care Reform (HCR): This dataset was constructed by crawling tweets containing the hashtag #hcr BIBREF15 . Task is to predict positive/negative tweets BIBREF14 .
Table IV shows the result of our model for sentiment classification against other models. We compare our model performance with the approaches of BIBREF0 BIBREF5 on STS Corpus.
For the Sanders and HCR datasets, we compare results with the model of BIBREF14, which used an ensemble of multiple base classifiers (ENS) such as NB, Random Forest (RF), SVM and Logistic Regression (LR).
|
Are results reported only on English datasets?
|
Yes.
|
1910.00825
| true
| null |
CIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities. For example, in news summarization the proper nouns are the critical information to retain.
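One plausible reading of an entity-coverage metric like CIC — the fraction of predefined essential entities retained in a summary — can be sketched as below. The function name, example, and exact formula are assumptions for illustration, not the paper's precise definition:

```python
def critical_info_coverage(summary, essential_entities):
    """Fraction of predefined essential entities that appear in the summary
    (case-insensitive substring match). A sketch of an entity-coverage
    metric, not the exact CIC formula."""
    text = summary.lower()
    found = sum(1 for e in essential_entities if e.lower() in text)
    return found / len(essential_entities)

print(critical_info_coverage("Patient Smith was prescribed aspirin.",
                             ["Smith", "aspirin", "dosage"]))  # 0.6666666666666666
```

For news summarization, the essential-entity list could simply be the proper nouns of the source article.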
CIC is a suitable complementary metric to ROUGE because it accounts for the most important information within each dialog domain. CIC can be applied to any summarization task with predefined essential entities.
|
Is the new evaluation metric an extension of ROUGE?
|
No.
|
null | false
| null |
Which of the following are women scientists: Marie Curie, Elizabeth Blackwell, Jane Goodall, Rosalind Franklin, Ada Lovelace, Maria Mitchell, Archimedes, Albert Einstein, Ernest Rutherford, Alfred Nobel, Alexander Fleming, Michael Faraday, Niels Bohr, Stephen Hawking, Alan Turing.
|
Marie Curie, Elizabeth Blackwell, Jane Goodall, Rosalind Franklin, Ada Lovelace, Maria Mitchell are women scientists.
|
|
null | false
| null |
What are the words of House Tully?
|
"Family, Duty, Honor."
|
|
null | false
| null |
The Battle of Thermopylae (/θərˈmɒpɪliː/ thər-MOP-i-lee; Greek: Μάχη τῶν Θερμοπυλῶν, Máchē tōn Thermopylōn) was fought in 480 BC between the Achaemenid Persian Empire under Xerxes I and an alliance of Greek city-states led by Sparta under Leonidas I. Lasting over the course of three days, it was one of the most prominent battles of both the second Persian invasion of Greece and the wider Greco-Persian Wars.
The engagement at Thermopylae occurred simultaneously with the Battle of Artemisium: between July and September 480 BC. The second Persian invasion under Xerxes I was a delayed response to the failure of the first Persian invasion, which had been initiated by Darius I and ended in 490 BC by an Athenian-led Greek victory at the Battle of Marathon. By 480 BC, a decade after the Persian defeat at Marathon, Xerxes had amassed a massive land and naval force, and subsequently set out to conquer all of Greece. In response, the Athenian politician and general Themistocles proposed that the allied Greeks block the advance of the Persian army at the pass of Thermopylae while simultaneously blocking the Persian navy at the Straits of Artemisium.
Around the start of the invasion, a Greek force of approximately 7,000 men led by Leonidas marched north to block the pass of Thermopylae. Ancient authors vastly inflated the size of the Persian army, with estimates in the millions, but modern scholars estimate it at between 120,000 and 300,000 soldiers. They arrived at Thermopylae by late August or early September; the outnumbered Greeks held them off for seven days (including three of direct battle) before their rear-guard was annihilated in one of history's most famous last stands. During two full days of battle, the Greeks blocked the only road by which the massive Persian army could traverse the narrow pass. After the second day, a local resident named Ephialtes revealed to the Persians the existence of a path leading behind the Greek lines. Subsequently, Leonidas, aware that his force was being outflanked by the Persians, dismissed the bulk of the Greek army and remained to guard their retreat along with 300 Spartans and 700 Thespians. It has been reported that others also remained, including up to 900 helots and 400 Thebans. With the exception of the Thebans, most of whom reportedly surrendered, the Greeks fought the Persians to the death.
Themistocles was in command of the Greek naval force at Artemisium when he received news that the Persians had taken the pass at Thermopylae. Since the Greek defensive strategy had required both Thermopylae and Artemisium to be held, the decision was made to withdraw to the island of Salamis. The Persians overran Boeotia and then captured the evacuated city of Athens. The Greek fleet—seeking a decisive victory over the Persian armada—attacked and defeated the invading force at the Battle of Salamis in late 480 BC. Wary of being trapped in Europe, Xerxes withdrew with much of his army to Asia, reportedly losing many of his troops to starvation and disease while also leaving behind the Persian military commander Mardonius to continue the Achaemenid Empire's Greek campaign. However, the following year saw a Greek army decisively defeat Mardonius and his troops at the Battle of Plataea, ending the second Persian invasion.
Both ancient and modern writers have used the Battle of Thermopylae as a flagship example of the power of an army defending its native soil. The performance of the Greek defenders is also used as an example of the advantages of training, equipment, and use of terrain as force multipliers.
|
Given a reference text about the Battle of Thermopylae, tell me when the battle was fought, who the battle was between, how many Greek and Persian forces there were, how the Persian army was able to flank the Greek forces and who won the battle?
|
The Battle of Thermopylae was fought in 480 BC. The battle was fought between the Persian Empire, led by Xerxes I, and a combination of allied Greek city-states led by Leonidas I of Sparta. There were approximately 7,000 Greek men led by Leonidas I and an estimated 120,000 - 300,000 Persian soldiers led by Xerxes I. The Persian army was able to outflank the Greek army because a local person named Ephialtes informed the Persians that there was an unguarded path that led behind the Greek lines. The Greek army made one final stand but was ultimately unsuccessful and was overtaken by the Persian army.
|
null | false
| null |
Was the new deal successful or a failure to solve the problems of the Great Depression?
|
The New Deal was a success in solving the problems caused by the Great Depression because Roosevelt took action quickly, passed many acts, and kept morale up for the people. Franklin Roosevelt writes, "This is no unsolvable problem if we face it wisely and courageously. It can be accomplished in part by direct recruiting by the Government itself, treating the task as we would treat the emergency of a war." Roosevelt is saying that, unlike Hoover, he was going to take action on what was happening. Anything he did would be treated as if it were war, because it was affecting the country as much as war would. The New Deal legislation shows that Franklin Roosevelt had a good relationship with Congress: 11 new acts were passed in 6 years while Roosevelt was in office. These helped people get jobs, lowered prices, and helped people get the money they deserved. Roosevelt says, "This great Nation will endure as it has endured, will revive and will prosper." This shows that Roosevelt knew how to lead people even in hard times. He knew that while things were bad, everyone would look to him, so he couldn't seem scared and couldn't kill the people's morale. With all of this, Roosevelt was able to lead the country away from the Great Depression and start to get it out of the crisis.
|
|
null | false
| null |
What is the The Science Laureates of the United States Act of 2013
|
The Science Laureates of the United States Act of 2013 (H.R. 1891) is a bill that would create the position of Science Laureate of the United States, meant to be similar to the United States Poet Laureate. The Science Laureate would spend their term traveling around the country promoting science and research to students and the general public. The bill was introduced into the United States House of Representatives during the 113th United States Congress.
|
|
2002.06675
| false
| null |
In the speaker-closed condition, two episodes were set aside from each speaker as development and test sets. The total sizes of the development and test sets turn out to be 1585 IPUs spanning 2 hours 23 minutes and 1841 IPUs spanning 2 hours and 48 minutes, respectively. The ASR model is trained with the rest of the data. In the speaker-open condition, all the data except for the test speaker's were used for training. As it would be difficult to train the model if all of the data of speaker KM or UT were removed, experiments using their speaker-open conditions were not conducted.
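The speaker-open condition amounts to a leave-one-speaker-out split, which can be sketched as follows (function name and data layout are illustrative):

```python
def speaker_open_split(episodes, test_speaker):
    """Leave-one-speaker-out split: all of the test speaker's episodes go to
    the test set; every other speaker's data is used for training."""
    train = [(spk, ep) for spk, ep in episodes if spk != test_speaker]
    test = [(spk, ep) for spk, ep in episodes if spk == test_speaker]
    return train, test

data = [("A", "ep1"), ("B", "ep2"), ("A", "ep3")]
train, test = speaker_open_split(data, "A")
print(train)  # [('B', 'ep2')]
print(test)   # [('A', 'ep1'), ('A', 'ep3')]
```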
In the speaker-closed condition, two episodes were set aside from each speaker as development and test sets. The total sizes of the development and test sets turn out to be 1585 IPUs spanning 2 hours 23 minutes and 1841 IPUs spanning 2 hours and 48 minutes, respectively. The ASR model is trained with the rest of the data. In the speaker-open condition, all the data except for the test speaker's were used for training. As it would be difficult to train the model if all of the data of speaker KM or UT were removed, experiments using their speaker-open conditions were not conducted.
|
What is the difference between speaker-open and speaker-closed setting?
|
The answers are shown as follows:
* In the speaker-closed condition, two episodes were set aside from each speaker as development and test sets.
* In the speaker-open condition, all the data except for the test speaker's were used for training
|
null | false
| 105
|
Accuracy in any machine learning tasks can be enhanced either by improving a machine learning model or by using more in-domain training data. Current state of the art models BIBREF6 , BIBREF7 , BIBREF8 , BIBREF11 improve over AS Reader's accuracy on CBT NE and CN datasets by 1-2 percent absolute. This suggests that with current techniques there is only limited room for improvement on the algorithmic side.
The other possibility to improve performance is simply to use more training data. The importance of training data was highlighted by the frequently quoted Mercer's statement that “There is no data like more data.” The observation that having more data is often more important than having better algorithms has been frequently stressed since then BIBREF13 , BIBREF14 .
As a step in the direction of exploiting the potential of more data in the domain of text comprehension, we created a new dataset called BookTest similar to, but much larger than the widely used CBT and CNN/DM datasets.
As a step in the direction of exploiting the potential of more data in the domain of text comprehension, we created a new dataset called BookTest similar to, but much larger than the widely used CBT and CNN/DM datasets.
|
Is their dataset a new dataset?
|
Yes, it is similar to CBT and CNN/DM datasets, but much larger.
|
null | false
| null |
Did Ghana lose by 100 against India in a soccer game?
|
This game actually didn't happen, but some Ghanaians across the entire nation believe it did and tell stories about it. Our parents describe the game as very intriguing, filled with mythical black magic and powers. It is believed that, over the course of the game, the soccer ball turned into different sorts of objects every time Ghana had possession. This, however, aided the Indian team to put up 100 goals, whereas Ghana only scored 1. Lastly, legend has it that the Ghanaian player who scored the only goal for Ghana died afterwards.
|
|
null | false
| null |
Which of these sentences is correct: "If I was you, I would do it" or "If I were you, I would do it"?
|
The correct sentence is "If I were you, I would do it." This is the subjunctive mood in English. It should be used when a hypothetical is contrary to fact. In this case, "I" is not "you," so the subjunctive form should be used. In the first person, the form of "be" used for the subjunctive mood is "were."
|
|
null | false
| null |
The Alabama Crimson Tide football program represents the University of Alabama (variously Alabama, UA, or Bama) in the sport of American football. The team competes in the Football Bowl Subdivision (FBS) of the National Collegiate Athletic Association (NCAA) and the Western Division of the Southeastern Conference (SEC). The team's head coach is Nick Saban, who has led the Tide to six national championships over his tenure. The Crimson Tide is among the most storied and decorated football programs in NCAA history. Since beginning play in 1892, the program claims 18 national championships, including 13 wire-service (AP or Coaches') national titles in the poll-era, and five other titles before the poll-era. From 1958 to 1982, the team was led by Hall of Fame coach Paul "Bear" Bryant, who won six national titles with the program. Despite numerous national and conference championships, it was not until 2009 that an Alabama player received a Heisman Trophy, when running back Mark Ingram II became the university's first winner. In 2015, Derrick Henry became the university's second Heisman winner. The Crimson Tide won back to back Heisman trophies in 2020 and 2021, with DeVonta Smith and Bryce Young.
|
How many Alabama football players have won the Heisman?
|
The Alabama Crimson Tide football program has produced four Heisman trophy winners. The first Crimson Tide player to win the Heisman was running back Mark Ingram II in 2009. Running back Derrick Henry won the Heisman trophy in 2015. Alabama had back-to-back Heisman trophy winners in 2020 and 2021, DeVonta Smith and Bryce Young.
|
null | false
| 263
|
With the wealth of information being posted online daily, Relation Extraction (RE) has become increasingly important. RE aims specifically to extract relations from raw sentences and represent them as succinct relation tuples of the form (head, relation, tail). An example is (Barack Obama, spouse, Michelle Obama).
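The (head, relation, tail) triple described above can be sketched as a small immutable record; the class name here is our own illustration, not part of any RE library:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class RelationTuple:
    """A succinct relation triple extracted from a raw sentence."""
    head: str
    relation: str
    tail: str


# The example triple from the text.
fact = RelationTuple("Barack Obama", "spouse", "Michelle Obama")
print(fact)
```

A knowledge base extended by an RE system is, at its core, a large collection of such triples.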
The concise representations provided by RE models have been used to extend Knowledge Bases (KBs) BIBREF0, BIBREF1. These KBs are then used heavily in NLP systems, such as task-based dialogue systems. In recent years, much of the focus in the NRE community has centered on improving model precision and reducing noise BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6. Yet little attention has been devoted to the fairness of such systems.
In this paper, we take the first step at understanding and evaluating gender bias in NRE systems. We analyze gender bias by measuring the differences in model performance when extracting relations from sentences written about females versus sentences written about males. Significant discrepancies in performance between genders could diminish the fairness of systems and distort outcomes in applications that use them. For example, if a model predicts the occupation relation with higher recall for male entities, this could lead to KBs having more occupation information for males. Downstream search tasks using that KB could produce biased predictions, such as ranking articles about female computer scientists below articles about their male peers.
We provide the first evaluation of social bias in NRE models; specifically, we evaluate gender bias in English language predictions of a collection of popularly used and open source NRE models BIBREF2, BIBREF4, BIBREF3, BIBREF5. We evaluate OpenNRE on two fronts: (1) examining Equality of Opportunity BIBREF7 when OpenNRE is trained on an unmodified dataset and (2) examining the effect that various debiasing options BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12 have on both absolute F1 score and the difference in F1-scores on male and female datapoints.
However, carrying out such an evaluation is difficult with existing NRE datasets, such as the NYT dataset from BIBREF13, because there is no reliable way to obtain gender information about the entities. Thus, we create a new dataset specifically aimed at evaluating gender bias for NRE, just as prior work has done for other tasks like Coreference Resolution BIBREF14, BIBREF9. We call our dataset WikiGenderBias and make it publicly available. Our contributions are as follows:
WikiGenderBias is the first dataset aimed at training and evaluating NRE systems for gender bias. It contains ground truth labels for the test set and about 45,000 sentences in total.
We provide the first evaluation of NRE systems for gender bias and find that they exhibit gender bias.
We demonstrate that using both gender-swapping and debiased embeddings effectively mitigates bias in the model's predictions, and that using gender-swapping improves the model's performance when the training data contains contextual biases.
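The Equality of Opportunity comparison described above amounts to measuring the gap in recall on a target relation between sentences about male entities and sentences about female entities. A minimal sketch with hypothetical toy predictions (the labels and data below are illustrative, not drawn from WikiGenderBias):

```python
def recall(y_true, y_pred, positive):
    """Recall for a single target relation label."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn) if (tp + fn) else 0.0


def equality_of_opportunity_gap(male, female, relation="occupation"):
    """Difference in recall on one relation between the male-entity
    and female-entity evaluation sets."""
    return recall(*male, relation) - recall(*female, relation)


# Toy (gold, predicted) label sequences for each group.
male_eval = (["occupation", "occupation", "spouse"],
             ["occupation", "occupation", "spouse"])
female_eval = (["occupation", "occupation", "spouse"],
               ["occupation", "NA", "spouse"])

print(equality_of_opportunity_gap(male_eval, female_eval))  # -> 0.5
```

A gap far from zero on a relation like occupation is exactly the kind of discrepancy that would propagate into a KB built from the model's output.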
In this paper, we take the first step at understanding and evaluating gender bias in NRE systems.
|
What do they evaluate in their paper?
|
Gender bias in Relation Extraction (RE) models.
|
null | false
| null |
What is a personalized sales email?
|
A personalized sales email is an email written to a specific person at a company. This type of email is tailored to a prospective customer in a way that touches upon why you are reaching out to that person and why you are reaching out to that person right now. An initial personalized email aims to spark interest in potential buyers, hoping they agree to discuss the value they might gain by purchasing your offering.
|
|
2003.07433
| false
| null |
There are many clinically validated PTSD assessment tools that are used both to detect the prevalence of PTSD and its intensity among sufferers. Among all of the tools, the most popular and well accepted one is the Domain-Specific Risk-Taking (DOSPERT) Scale BIBREF15. This is a psychometric scale that assesses risk taking in five content domains: financial decisions (separately for investing versus gambling), health/safety, recreational, ethical, and social decisions. Respondents rate the likelihood that they would engage in domain-specific risky activities (Part I). An optional Part II assesses respondents' perceptions of the magnitude of the risks and expected benefits of the activities judged in Part I. More scales are used in risky behavior analysis of individuals' daily activities, such as the Berlin Social Support Scales (BSSS) BIBREF16 and the Values In Action Scale (VIAS) BIBREF17. Dryhootch America BIBREF18, BIBREF19, a veteran peer support community organization, chooses 5, 6 and 5 questions respectively from the above mentioned survey systems to assess PTSD among war veterans and considers the rest of them irrelevant to PTSD. The details of the dryhootch-chosen survey scale are stated in Table TABREF13. Table TABREF14 shows a sample DOSPERT scale demographic chosen by dryhootch. The threshold (in Table TABREF13) is used to calculate the risky behavior limits. For example, if one individual's weekly DOSPERT score goes over 28, he is in a critical situation in terms of risk-taking symptoms of PTSD. Dryhootch defines the intensity of PTSD into four categories based on the weekly survey results of all three clinical survey tools (DOSPERT, BSSS and VIAS):
High risk PTSD: If an individual veteran's weekly PTSD assessment scores go above the threshold for all three PTSD assessment tools, i.e. DOSPERT, BSSS and VIAS, then he/she is in a high-risk situation which needs immediate mental support to avoid catastrophic effects on the individual's health or the lives of surrounding people.
Moderate risk PTSD: If an individual veteran's weekly PTSD assessment scores go above the threshold for any two of the three PTSD assessment tools, then he/she is in a moderate-risk situation which needs close observation and peer mentoring to avoid risk progression.
Low risk PTSD: If an individual veteran's weekly PTSD assessment scores go above the threshold for any one of the three PTSD assessment tools, then he/she has light symptoms of PTSD.
No PTSD: If an individual veteran's weekly PTSD assessment scores stay below the threshold for all three PTSD assessment tools, then he/she has no PTSD.
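The four categories amount to counting how many of the three weekly scale scores exceed their thresholds. A minimal sketch of this rule: the DOSPERT threshold of 28 comes from the text, while the BSSS and VIAS thresholds below are made-up placeholders, not dryhootch's actual values.

```python
def ptsd_risk_category(dospert, bsss, vias, thresholds):
    """Map weekly scale scores to one of the four dryhootch risk
    categories by counting how many scales exceed their thresholds."""
    scores = {"DOSPERT": dospert, "BSSS": bsss, "VIAS": vias}
    exceeded = sum(score > thresholds[name] for name, score in scores.items())
    return {3: "high risk", 2: "moderate risk",
            1: "low risk", 0: "no PTSD"}[exceeded]


# Placeholder thresholds for BSSS and VIAS; only DOSPERT's 28 is given.
thresholds = {"DOSPERT": 28, "BSSS": 20, "VIAS": 20}
print(ptsd_risk_category(30, 25, 26, thresholds))  # -> high risk
```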
Dryhootch defines the intensity of PTSD into four categories based on the weekly survey results of all three clinical survey tools (DOSPERT, BSSS and VIAS )
High risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for all three PTSD assessment tools i.e. DOSPERT, BSSS and VIAS, then he/she is in high risk situation which needs immediate mental support to avoid catastrophic effect of individual's health or surrounding people's life.
Moderate risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for any two of the three PTSD assessment tools, then he/she is in moderate risk situation which needs close observation and peer mentoring to avoid their risk progression.
Low risk PTSD: If one individual veteran's weekly PTSD assessment scores go above the threshold for any one of the three PTSD assessment tools, then he/she has light symptoms of PTSD.
No PTSD: If one individual veteran's weekly PTSD assessment scores go below the threshold for all three PTSD assessment tools, then he/she has no PTSD.
|
How is the intensity of the PTSD established?
|
It is divided into four categories: high risk, moderate risk, low risk, and no PTSD.
|
null | false
| null |
How do you make a Negroni?
|
Here is how to make a Negroni cocktail:
Ingredients:
- Gin
- Campari
- Sweet Vermouth
- Orange peel
- Ice for mixing
- Ice for serving; Single large ice "rock" generally preferred
Bar tools needed:
- Mixing glass
- Mixing spoon
- Strainer
- Glass for serving; preferably a cocktail tumbler style glass
Making the cocktail
- Place the ice (or ice rock) in the serving glass
- Pour 1 ounce each of the gin, Campari, and vermouth into the mixing glass
- Add 5-10 ice cubes to the mixing glass
- Using the mixing spoon, stir the ingredients vigorously for 20-30 seconds
- Using the strainer, pour the cocktail into the serving glass
- Twist the orange peel to release oil and aroma; rub orange peel around rim of glass; place orange peel into the cocktail
- Enjoy!
If you enjoy the negroni, you can experiment with the ingredients. Many enjoy mezcal as a replacement for the gin. Also, you can swap out Campari for other bitter style aperitifs; St. George's Bruto Americano is a good example of a possible substitute.
|
|
null | false
| null |
Lionel Messi is an Argentine professional footballer who has represented the Argentina national football team as a forward since his debut in 2005. Since then, Messi has scored 102 goals in 174 international appearances, making him the country's all-time top scorer; he surpassed Gabriel Batistuta's record of 54 goals with a free kick against the United States in the semi-final of the Copa América Centenario on 21 June 2016. He also holds the record for most goals by a South American male, surpassing Pelé‘s 77 goals with a hat-trick against Bolivia in September 2021. Messi made his debut for Argentina in a 2–1 away win over Hungary on 17 August 2005, and scored his first international goal a year later in his sixth appearance, against Croatia.
Messi's goal against Serbia and Montenegro, on 16 June 2006, at the age of 18 years and 357 days, made him the youngest-ever scorer for Argentina at a FIFA World Cup. He has scored nine international hat-tricks, and has netted twice in a match on ten occasions. In a June 2022 friendly against Estonia, Messi scored five goals in a match for Argentina for the first time. Out of all his opponents, Messi has scored the most against Bolivia, netting eight goals in total.
|
What are some highlights of Lionel Messi's career goals?
|
He is his country's all-time top scorer with 102 goals in 174 international appearances. He also holds the record for most goals by a South American male, surpassing Pelé's 77 goals. He was the youngest-ever scorer for Argentina at a FIFA World Cup. He has scored nine international hat-tricks, and in June 2022 he scored five goals in a match for Argentina for the first time, against Estonia.
|
null | false
| null |
Identify which instrument is string or percussion: Erxian, Alfaia
|
Erxian is string, Alfaia is percussion.
|
|
null | false
| null |
Tell me whether these animals are a canine or a feline: French Bulldog, Siamese Cat, Sharpei
|
The French Bulldog is a canine, the Siamese Cat is a feline and the Sharpei is a canine.
|
|
null | false
| 358
|
Deep neural models have recently achieved remarkable results in computer vision BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , and in a range of NLP tasks such as sentiment classification BIBREF4 , BIBREF5 , BIBREF6 and question answering BIBREF7 . Convolutional neural networks (CNNs) and recurrent neural networks (RNNs), especially the Long Short-Term Memory network (LSTM), are used widely in natural language processing tasks. With increasing amounts of data, these two methods can reach considerable performance while requiring only limited domain knowledge, and at the same time they are easy to fine-tune for specific applications.
CNNs, which have the ability to capture local correlations of spatial or temporal structures, have achieved excellent performance in computer vision and NLP tasks. The recent emergence of new techniques, such as the Inception module BIBREF8 , Batchnorm BIBREF9 and Residual Networks BIBREF3 , has made their performance even better. For sentence modeling, CNNs excel at extracting n-gram features at different positions of a sentence through convolutional filters.
RNNs, with the ability to handle sequences of any length and capture long-term dependencies, have also achieved remarkable results in sentence and document modeling tasks. LSTMs BIBREF10 were designed for better remembering and memory access, and they also avoid the problem of exploding or vanishing gradients in the standard RNN. Capable of incorporating context on both sides of every position in the input sequence, BLSTMs, introduced in BIBREF11 , BIBREF12 , have been reported to achieve great performance in handwriting recognition BIBREF13 and machine translation BIBREF14 tasks.
Generative adversarial networks (GANs) BIBREF15 are a class of generative models for learning how to produce images. Basically, GANs consist of a generator G and a discriminator D, which are trained based on game theory. G maps an input noise vector to an output image, while D takes in an image and outputs a prediction of whether the input image is a sample generated by G. Recent applications of GANs have shown that they can generate promising results BIBREF16 , BIBREF17 . Several recent papers have also extended GANs to the semi-supervised context BIBREF18 , BIBREF19 by simply increasing the dimension of the classifier output from INLINEFORM0 to INLINEFORM1 , where the samples of the extra class are generated by G.
In this paper, we propose an end-to-end architecture named AC-BLSTM, which combines the ACNN with the BLSTM for sentence and document modeling. In order to make the model deeper, instead of using normal convolution we apply the technique proposed in BIBREF8 , which employs a INLINEFORM0 convolution followed by a INLINEFORM1 convolution, spatially factorizing the INLINEFORM2 convolution. We use the pretrained word2vec vectors BIBREF20 as the ACNN input, which were trained on 100 billion words of Google News, to learn higher-level representations of n-grams. The outputs of the ACNN are organized as the sequence window feature to feed into the multi-layer BLSTM. Thus our model does not rely on any extra domain-specific knowledge or complex preprocessing, e.g. word segmentation, part-of-speech tagging and so on. We evaluate AC-BLSTM on sentence-level and document-level tasks including sentiment analysis, question type classification, and subjectivity classification. Experimental results demonstrate the effectiveness of our approach compared with other state-of-the-art methods. Furthermore, inspired by the ideas of extending GANs to the semi-supervised learning context BIBREF18 , BIBREF19 , we propose a semi-supervised learning framework for text classification which further improves the performance of AC-BLSTM.
The rest of the paper is organized as follows. Section 2 presents a brief review of related work. Section 3 describes the architecture of AC-BLSTM and our semi-supervised framework. Section 4 presents the experimental results with comparative analysis. Section 5 concludes the paper.
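The spatial factorization the introduction refers to replaces one k×k convolution with a 1×k convolution followed by a k×1 convolution. For a rank-1 (separable) kernel this factorization is exact, which a short numpy sketch can verify; the 3×3 size, the hand-rolled valid-mode convolution, and the random data below are illustrative assumptions, not the paper's actual layer configuration.

```python
import numpy as np


def conv2d_valid(x, k):
    """Valid-mode 2D cross-correlation, written out explicitly."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out


rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8))     # toy "feature map"
u = rng.normal(size=(3, 1))     # 3x1 column filter
v = rng.normal(size=(1, 3))     # 1x3 row filter
k = u @ v                       # the rank-1 3x3 kernel they compose

direct = conv2d_valid(x, k)                     # one 3x3 convolution
factored = conv2d_valid(conv2d_valid(x, v), u)  # 1x3 then 3x1
assert np.allclose(direct, factored)
```

A learned (v, u) pair stores 3 + 3 = 6 weights instead of 9 for a full 3×3 kernel; general kernels are only approximated, so in practice the network learns the two factored layers directly, which is the depth and parameter trade-off the Inception-style factorization exploits.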
Further more, inspired by the ideas of extending GANs to the semi-supervised learning context by (Odena, 2016; Salimans et al., 2016), we propose a semi-supervised learning framework for text classification which further improve the performance of AC-BLSTM.
|
Does the semi-supervised learning framework for text classification further improve the performance of AC-BLSTM?
|
Yes, it does.
|
null | false
| null |
Impact wrenches are widely used in many industries, such as automotive repair, heavy equipment maintenance, product assembly, major construction projects, and any other instance where a high torque output is needed. For product assembly, a pulse tool is commonly used, as it features a reactionless tightening while reducing the noise levels the regular impacts suffer from. Pulse tools use oil as a medium to transfer the kinetic energy from the hammer into the anvil. This gives a smoother impulse, a slightly lower torque to weight ratio and a possibility to design a shut off mechanism that shuts the tool down when achieving the correct torque. Pulse tools are not referred to as "impact wrenches" as the performance and technology are not the same.
|
Why are pulse tools used in product assembly over impact wrenches?
|
Pulse tools are used for product assembly because they provide reactionless tightening and reduce noise levels.
|