| paper_id (string, lengths 10–10, ⌀) | yes_no (bool, 2 classes) | paper_index (int64, 0–519, ⌀) | evidence (string, lengths 0–37.7k) | question (string, lengths 4–11.7k) | answer (string, lengths 1–26k) |
|---|---|---|---|---|---|
null | false
| 86
|
The focus of this section is on recently published datasets and LID research applicable to the South African context. An in-depth survey of algorithms, features, datasets, shared tasks and evaluation methods may be found in BIBREF0.
The datasets for the DSL 2015 & DSL 2017 shared tasks BIBREF1 are often used in LID benchmarks and are also available on Kaggle. The DSL datasets, like other LID datasets, consist of text sentences labelled by language. The 2017 dataset, for example, contains 14 languages over 6 language groups, with 18,000 training samples and 1,000 testing samples per language.
The recently published JW300 parallel corpus BIBREF2 covers over 300 languages with around 100 thousand parallel sentences per language pair on average. In South Africa, a multilingual corpus of academic texts produced by university students with different mother tongues is being developed BIBREF3. The WiLI-2018 benchmark dataset BIBREF4 for monolingual written natural language identification includes around 1,000 paragraphs for each of 235 languages. A possibly useful link can also be made BIBREF5 between Native Language Identification (NLI) (determining the native language of the author of a text) and Language Variety Identification (LVI) (classification of different varieties of a single language), which opens up more datasets. The Leipzig Corpora Collection BIBREF6, the Universal Declaration of Human Rights and Tatoeba are also frequently used sources of data.
The NCHLT text corpora BIBREF7 are likely a good starting point for a shared LID task dataset for the South African languages BIBREF8. The NCHLT text corpora contain enough data for 3,500 training samples and 600 testing samples of 300+ character sentences per language. Researchers have recently started applying existing algorithms, for tasks like neural machine translation, in earnest to such South African language datasets BIBREF9.
Existing NLP datasets, models and services BIBREF10 are available for South African languages. These include an LID algorithm BIBREF11 that uses a character-level n-gram language model. Multiple papers have shown that 'shallow' naive Bayes classifiers BIBREF12, BIBREF8, BIBREF13, BIBREF14, SVMs BIBREF15 and similar models work very well for LID. The DSL 2017 paper BIBREF1, for example, gives an overview of the solutions of all of the teams that competed on the shared task; the winning approach BIBREF16 used an SVM with character n-gram features, part-of-speech tag features and other engineered features. The winning approach for DSL 2015 used an ensemble naive Bayes classifier. The fasttext classifier BIBREF17 is perhaps one of the best-known efficient 'shallow' text classifiers that has been used for LID.
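As a concrete illustration of such a 'shallow' approach, the following minimal Python sketch trains a character n-gram naive Bayes LID classifier; the training sentences and language codes are placeholder assumptions, not drawn from any of the cited datasets:

```python
# Minimal character n-gram naive Bayes LID sketch (illustrative data only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = ["hello world", "hallo wêreld", "molo lizwe"]  # placeholder sentences
train_langs = ["eng", "afr", "xho"]                          # placeholder language codes

model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 4)),  # character 1- to 4-grams
    MultinomialNB(),
)
model.fit(train_texts, train_langs)
print(model.predict(["goeie môre wêreld"]))  # -> predicted language code
```

A real system would of course be trained on thousands of labelled sentences per language, as in the NCHLT or DSL datasets described above.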
Multiple papers have proposed hierarchical stacked classifiers (including lexicons) that would, for example, first classify a piece of text by language group and then by exact language BIBREF18, BIBREF19, BIBREF8, BIBREF0. Some work has also been done on classifying surnames between Tshivenda, Xitsonga and Sepedi BIBREF20. Additionally, data augmentation BIBREF21 and adversarial training BIBREF22 approaches are potentially very useful for reducing the amount of data required.
Researchers have investigated deeper LID models like bidirectional recurrent neural networks BIBREF23 or ensembles of recurrent neural networks BIBREF24. The latter is reported to achieve 95.12% accuracy in the DSL 2015 shared task. In these models, text features can include character and word n-grams as well as informative character- and word-level features learnt BIBREF25 from the training data. The neural methods seem to work well in tasks where more training data is available.
In summary, LID of short texts, informal styles and similar languages remains a difficult problem which is actively being researched. In general, increased confusion can be expected for shorter pieces of text and for more closely related languages. Shallow methods still seem to work well compared to deeper models for LID. Other remaining research opportunities include data harvesting, building standardised datasets and creating shared tasks for South Africa and Africa. Support for language codes that include more languages seems to be growing, and the discoverability of research is improving with more survey papers coming out. Paywalls also seem to no longer be a problem; the references used in this paper were either openly published or available as preprints.
In summary, LID of short texts, informal styles and similar languages remains a difficult problem which is actively being researched.
|
Is LID of short texts, informal styles, and similar languages still a difficult problem?
|
Yes.
|
null | false
| null |
Kenya has a diverse population that includes many of Africa's major ethnoracial and linguistic groups. Although there is no official list of Kenyan ethnic groups, the number of ethnic categories and sub-categories recorded in the country's census has changed significantly over time, expanding from 42 in 1969 to more than 120 in 2019. Most residents are Bantus (60%) or Nilotes (30%). Cushitic groups also form a small ethnic minority, as do Arabs, Indians, and Europeans.
According to the Kenya National Bureau of Statistics (KNBS), in 2019, Kenya had a total population of 47,564,296. The largest native ethnic groups were the Kikuyu (8,148,668), Luhya (6,823,842), Kalenjin (6,358,113), Luo (5,066,966), Kamba (4,663,910), Somali (2,780,502), Kisii (2,703,235), Mijikenda (2,488,691), Meru (1,975,869), Maasai (1,189,522), and Turkana (1,016,174). The North Eastern Province of Kenya, formerly known as NFD, is predominantly inhabited by the indigenous ethnic Somalis. Foreign-rooted populations include Arabs, Asians, and Europeans.
|
What are the three largest ethnic groups in Kenya?
|
The largest ethnic groups in Kenya are the Kikuyu, Luhya, and Kalenjin.
|
null | false
| null |
Name the novel for which Douglas Stuart won the Booker.
|
Shuggie Bain
|
|
null | false
| null |
Bacteria (/bækˈtɪəriə/ (listen); singular: bacterium) are ubiquitous, mostly free-living organisms often consisting of one biological cell. They constitute a large domain of prokaryotic microorganisms. Typically a few micrometres in length, bacteria were among the first life forms to appear on Earth, and are present in most of its habitats. Bacteria inhabit soil, water, acidic hot springs, radioactive waste, and the deep biosphere of Earth's crust. Bacteria are vital in many stages of the nutrient cycle by recycling nutrients such as the fixation of nitrogen from the atmosphere. The nutrient cycle includes the decomposition of dead bodies; bacteria are responsible for the putrefaction stage in this process. In the biological communities surrounding hydrothermal vents and cold seeps, extremophile bacteria provide the nutrients needed to sustain life by converting dissolved compounds, such as hydrogen sulphide and methane, to energy. Bacteria also live in symbiotic and parasitic relationships with plants and animals. Most bacteria have not been characterised and there are many species that cannot be grown in the laboratory. The study of bacteria is known as bacteriology, a branch of microbiology.
|
From the passage, identify the places that bacteria inhabit. Display the results in comma separated format.
|
soil, water, acidic hot springs, radioactive waste, deep biosphere of Earth's crust
|
null | false
| null |
Nagelsmann era (2021–2023)
Under new coach Julian Nagelsmann, Bayern completed the feat of winning 10 consecutive Bundesliga titles following a 3–1 Der Klassiker win. However, the team unexpectedly lost to Villarreal in the Champions League quarter-finals, going out at that stage for the second year in a row. On 24 March 2023, Nagelsmann was released by Bayern and replaced by Thomas Tuchel, who received a contract until 30 June 2025.
|
Who is the next coach at FC Bayern München to take over from Julian Nagelsmann?
|
Thomas Tuchel, previously the coach at Chelsea, replaced Julian Nagelsmann in March 2023 and is now the coach at FC Bayern Munich.
|
null | false
| null |
How can I get people to like me?
|
It's important to remember that not everyone will like you in this world, no matter how great you are. So don't be discouraged if you find someone doesn't like you as much as you think they should.
Here are some tips for getting people to like you:
- Ask them questions about themselves: People enjoy talking about themselves and if you ask them thoughtful questions they will see that you are a caring person who pays attention to them.
- Read social cues: It's important to be able to understand when someone is not interested in what you are talking about. If you find someone you're talking to is looking around the room and giving you short responses, you should try and steer the conversation in a different direction to see if you can engage them.
- Have a sense of humor: Laughing with other people is a great way to create bonds. You can try to be a little self-deprecating or just make innocent jokes about your situation. But be careful to not alienate people with humor if you don't know them well. Jokes about religion, politics, and other people might not go over well with some people.
|
|
null | false
| null |
Write a children's story about a baby called Arthur who weighed three times the normal weight for a boy his age
|
Once upon a time in a country far far away there was a baby called Arthur who was born into a loving family.
In many ways, this event was perfectly normal and wouldn't have attracted any special attention.
However, there was one big difference with this baby, and when I say big I mean BIG! This little baby was not so little after all, in fact Arthur was three times the weight of a normal baby.
Arthur was so heavy that his parents had to build a customised pram with a reinforced frame just to move him around. On top of this, bathtime would take over an hour because they had to clean between all his rolls of fat.
Even though it could be harder to care for such a big baby, Arthur's parents always felt so lucky and didn't mind the extra work; they always said there was just more of him to love.
The end
|
|
2004.01894
| false
| null |
Evaluation of Representation Models ::: Experiments ::: Multimodal representation.
We combined textual and image representations in two simple ways. The first method is concatenation of the text and image representation (concat). Before concatenation, we applied L2 normalization to each of the modalities. The second method is to learn a common space for the two modalities before concatenation (project).
The projection of each modality learns a space of $d$-dimensions, so that $h_{1}, h_{2} \in \mathbb {R}^{d}$. Once the multimodal representation is produced ($h_{m}$) for the left and right pairs, vectors are directly plugged into the regression layers. Projections are learned end-to-end with the regression layers and the MSE as loss function.
Multimodal representation.
We combined textual and image representations in two simple ways. The first method is concatenation of the text and image representation (concat). Before concatenation, we applied L2 normalization to each of the modalities. The second method is to learn a common space for the two modalities before concatenation (project).
The projection of each modality learns a space of $d$-dimensions, so that $h_{1}, h_{2} \in \mathbb {R}^{d}$. Once the multimodal representation is produced ($h_{m}$) for the left and right pairs, vectors are directly plugged into the regression layers. Projections are learned end-to-end with the regression layers and the MSE as loss function.
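The two fusion methods can be sketched as follows in PyTorch; the module and variable names are my own, and the dimensions are illustrative assumptions rather than the paper's settings:

```python
# Sketch of the two fusion strategies: "concat" (L2-normalize, then
# concatenate) and "project" (map each modality into a shared d-dim
# space before concatenating). Dimensions are placeholders.
import torch
import torch.nn.functional as F

def concat_fusion(text_vec, image_vec):
    return torch.cat([F.normalize(text_vec, p=2, dim=-1),
                      F.normalize(image_vec, p=2, dim=-1)], dim=-1)

class ProjectFusion(torch.nn.Module):
    def __init__(self, text_dim=768, image_dim=2048, d=256):
        super().__init__()
        self.text_proj = torch.nn.Linear(text_dim, d)    # h1 in R^d
        self.image_proj = torch.nn.Linear(image_dim, d)  # h2 in R^d

    def forward(self, text_vec, image_vec):
        return torch.cat([self.text_proj(text_vec),
                          self.image_proj(image_vec)], dim=-1)

# Either multimodal vector h_m is then fed to the regression layers and
# trained end-to-end with an MSE loss, as described above.
```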
|
What multimodal representations are used in the experiments?
|
The answers are shown as follows:
* The second method is to learn a common space for the two modalities before concatenation (project)
* The first method is concatenation of the text and image representation (concat)
|
null | false
| null |
What is a dividend?
|
If a company has profits to share out, it can pay a dividend. The shareholders receive a certain amount of dividend for each share they own.
|
|
2001.06888
| false
| null |
In BIBREF8 a refined collection of tweets gathered from Twitter is presented. Their dataset, which is labeled for the named entity recognition task, contains 8,257 tweets. There are 12,784 entities in total in this dataset. Table TABREF19 shows statistics for each named entity type in the training, development and test sets.
In BIBREF8 a refined collection of tweets gathered from Twitter is presented. Their dataset, which is labeled for the named entity recognition task, contains 8,257 tweets. There are 12,784 entities in total in this dataset. Table TABREF19 shows statistics for each named entity type in the training, development and test sets.
|
Which social media platform is explored?
|
The answers are shown as follows:
* Twitter
|
null | false
| null |
What are some top things to do in Amsterdam?
|
1) Museums! Book these ASAP: Van Gogh and Rijksmuseum, Anne Frank Huis
2) Book a canal tour or just grab one near Rijksmuseum. If you want something quieter and more adventurous, book an electric boat e.g. Mokumboot and go around the canals on your own.
3) Grab a coffee from Rumbaba and walk around Jordaan in the morning. Find a good pastry shop (there are several!) and get the sights while you walk.
4) Foodhallen ... expensive but cool place to get dinner
5) Artis Zoo. A super cool zoo, plus you get to walk around De Plantage
|
|
null | false
| null |
In contrast to powder snow avalanches, wet snow avalanches are a low velocity suspension of snow and water, with the flow confined to the track surface (McClung, first edition 1999, page 108). The low speed of travel is due to the friction between the sliding surface of the track and the water saturated flow. Despite the low speed of travel (≈10–40 km/h), wet snow avalanches are capable of generating powerful destructive forces, due to the large mass and density. The body of the flow of a wet snow avalanche can plough through soft snow, and can scour boulders, earth, trees, and other vegetation; leaving exposed and often scored ground in the avalanche track. Wet snow avalanches can be initiated from either loose snow releases, or slab releases, and only occur in snowpacks that are water saturated and isothermally equilibrated to the melting point of water. The isothermal characteristic of wet snow avalanches has led to the secondary term of isothermal slides found in the literature (for example in Daffern, 1999, page 93). At temperate latitudes wet snow avalanches are frequently associated with climatic avalanche cycles at the end of the winter season, when there is significant daytime warming.
|
Using examples from the text give me a rundown of the causes of wet snow avalanches.
|
Wet snow avalanches can be initiated from either loose snow releases, or slab releases, and only occur in snowpacks that are water saturated and isothermally equilibrated to the melting point of water.
|
1807.07961
| true
| null |
FLOAT SELECTED: Table 1: Tweet examples with emojis. The sentiment ground truth is given in the second column. The examples show that inconsistent sentiments exist between emojis and texts.
We construct our own Twitter sentiment dataset by crawling tweets through the REST API; it covers 350,000 users and is an order of magnitude larger than the datasets used in previous work. We collect up to 3,200 tweets from each user and follow the standard tweet preprocessing procedures: we remove tweets without emojis and tweets containing fewer than ten words, and strip content such as URLs, mentions, and emails.
For acquiring the sentiment annotations, we first use Vader, a rule-based sentiment analysis algorithm BIBREF17, on the text of the tweets only to generate weak sentiment labels. The algorithm outputs sentiment scores ranging from -1 (negative) to 1 (positive), with neutral in the middle. We treat sentiment analysis as a binary classification problem (positive sentiment and negative sentiment), so we filter out samples with weak prediction scores within INLINEFORM0 and keep the tweets with strong sentiment signals. Emoji occurrences are calculated separately for positive tweets and negative tweets, and the threshold is set to 2,000 to further filter out emojis which are less frequently used in at least one type of sentimental text. In the end, we have constructed a dataset with 1,492,065 tweets and 55 frequently used emojis in total. For the tweets with an absolute sentiment score over 0.70, we keep the auto-generated sentiment label as ground truth because the automatic annotation is reliable at high sentiment scores. On the other hand, we select a subset of the tweets with absolute sentiment scores between INLINEFORM1 for manual labeling by random sampling following the distribution of emoji occurrences; each tweet is labeled by two graduate students. Tweets are discarded if the two annotations disagree with each other or if they are labeled as neutral. In the end, we have obtained 4,183 manually labeled tweets, of which 60% are used for fine-tuning and 40% for testing purposes. The remainder of the tweets with automatic annotations are divided into three sets: 60% are used for pre-training the bi-sense and conventional emoji embeddings, 10% for validation and 30% for testing. We do not include a “neutral” class because it is difficult to obtain valid neutral samples. For auto-generated labels, the neutrals are the samples with low absolute confidence scores, and their sentiments are more likely to be model failures rather than “true neutrals”. Moreover, based on the human annotations, most tweets with emojis convey non-neutral sentiment, and only a few neutral samples were observed during the manual labeling; these are excluded from the manually labeled subset.
FLOAT SELECTED: Table 1: Tweet examples with emojis. The sentiment ground truth is given in the second column. The examples show that inconsistent sentiments exist between emojis and texts.
We construct our own Twitter sentiment dataset by crawling tweets through the REST API; it covers 350,000 users and is an order of magnitude larger than the datasets used in previous work. We collect up to 3,200 tweets from each user and follow the standard tweet preprocessing procedures: we remove tweets without emojis and tweets containing fewer than ten words, and strip content such as URLs, mentions, and emails.
For acquiring the sentiment annotations, we first use Vader, a rule-based sentiment analysis algorithm BIBREF17, on the text of the tweets only to generate weak sentiment labels.
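The weak-labeling step can be sketched in Python using the VADER analyzer; the 0.70 strong-signal threshold comes from the text above, while the function structure and the handling of the weak band are my own assumptions:

```python
# Sketch of VADER-based weak labeling for tweets (thresholding per the
# description above; the weak band is routed to manual annotation).
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def weak_label(tweet_text, strong=0.70):
    score = analyzer.polarity_scores(tweet_text)["compound"]  # in [-1, 1]
    if abs(score) >= strong:
        # strong signal: keep the auto-generated label as ground truth
        return "positive" if score > 0 else "negative"
    return None  # weak signal: candidate for manual labeling or discard
```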
|
Do they evaluate only on English datasets?
|
Yes.
|
null | false
| null |
Cabernet Sauvignon (French: [kabɛʁnɛ soviɲɔ̃]) is one of the world's most widely recognized red wine grape varieties. It is grown in nearly every major wine producing country among a diverse spectrum of climates from Australia and British Columbia, Canada to Lebanon's Beqaa Valley. Cabernet Sauvignon became internationally recognized through its prominence in Bordeaux wines, where it is often blended with Merlot and Cabernet Franc. From France and Spain, the grape spread across Europe and to the New World where it found new homes in places like California's Santa Cruz Mountains, Paso Robles, Napa Valley, New Zealand's Hawke's Bay, South Africa's Stellenbosch region, Australia's Margaret River, McLaren Vale and Coonawarra regions, and Chile's Maipo Valley and Colchagua. For most of the 20th century, it was the world's most widely planted premium red wine grape until it was surpassed by Merlot in the 1990s. However, by 2015, Cabernet Sauvignon had once again become the most widely planted wine grape, with a total of 341,000 hectares (3,410 km2) under vine worldwide
|
What is Cabernet Sauvignon?
|
Cabernet Sauvignon (French: [kabɛʁnɛ soviɲɔ̃]) is one of the world's most widely recognized red wine grape varieties.
|
2003.05995
| false
| null |
In this paper, we provide a brief survey of existing datasets and describe the CRWIZ framework for pairing crowdworkers and having half of them acting as Wizards by limiting their dialogue options only to relevant and plausible ones, at any one point in the interaction. We then perform a data collection and compare our dataset to a similar dataset collected in a more controlled lab setting with a single Wizard BIBREF4 and discuss the advantages/disadvantages of both approaches. Finally, we present future work. Our contributions are as follows:
Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions.
In this paper, we provide a brief survey of existing datasets and describe the CRWIZ framework for pairing crowdworkers and having half of them acting as Wizards by limiting their dialogue options only to relevant and plausible ones, at any one point in the interaction
Dialogue structure: we introduced structured dialogues through a Finite State Machine (FSM) that controls the current dialogue state and offers multiple suitable and relevant state transitions (actions) to the Wizard depending on the point in the interaction, the state of the world and the history. A graph of dialogue states, transitions and utterances is loaded when the system is initialised, and each chat room has its own dialogue state, which changes through actions.
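A minimal sketch of such an FSM-driven dialogue controller is given below; the states, transitions and class names are invented placeholders for illustration, not the actual graph used by CRWIZ:

```python
# Illustrative dialogue FSM: each chat room holds its own state, and the
# Wizard is only offered the transitions that are valid from that state.
DIALOGUE_GRAPH = {
    "greeting":     ["ask_status", "confirm_mission"],
    "ask_status":   ["report_ok", "report_fault"],
    "report_fault": ["propose_inspection", "end"],
}

class ChatRoomFSM:
    def __init__(self, start="greeting"):
        self.state = start
        self.history = [start]  # interaction history

    def available_actions(self):
        # Only relevant/plausible options are shown to the Wizard.
        return DIALOGUE_GRAPH.get(self.state, [])

    def take(self, action):
        assert action in self.available_actions(), "transition not allowed"
        self.state = action
        self.history.append(action)
```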
|
How is dialogue guided to avoid interactions that breach procedures and processes only known to experts?
|
The answers are shown as follows:
* pairing crowdworkers and having half of them acting as Wizards by limiting their dialogue options only to relevant and plausible ones, at any one point in the interaction
|
null | false
| null |
Apache Spark has built-in support for Scala, Java, SQL, R, and Python with 3rd party support for the .NET CLR, Julia and more.
|
What programming languages does Spark support?
|
Apache Spark has built-in support for Scala, Java, SQL, R, and Python. It also has third-party support for languages such as the .NET CLR, Julia and more.
|
null | false
| null |
The Mughal Empire was an early-modern empire that controlled much of the Indian subcontinent between the 16th and 19th centuries. For some two hundred years, the empire stretched from the outer fringes of the Indus river basin in the west, northern Afghanistan in the northwest, and Kashmir in the north, to the highlands of present-day Assam and Bangladesh in the east, and the uplands of the Deccan Plateau in South India. It included present-day areas of India, Pakistan, Afghanistan and Bangladesh.
The Mughal empire is conventionally said to have been founded in 1526 by Babur, a warrior chieftain from what is today Uzbekistan, who employed aid from the neighboring Safavid and Ottoman empires, to defeat the Sultan of Delhi, Ibrahim Lodi, in the First Battle of Panipat, and to sweep down the plains of North India. The Mughal imperial structure, however, is sometimes dated to 1600, to the rule of Babur's grandson, Akbar. This imperial structure lasted until 1720, shortly after the death of the last major emperor, Aurangzeb, during whose reign the empire also achieved its maximum geographical extent. Reduced subsequently to the region in and around Old Delhi by 1760, the empire was formally dissolved by the British Raj after the Indian Rebellion of 1857.
Although the Mughal empire was created and sustained by military warfare, it did not vigorously suppress the cultures and peoples it came to rule; rather it equalized and placated them through new administrative practices and diverse ruling elites, leading to more efficient, centralised, and standardized rule. The base of the empire's collective wealth was agricultural taxes, instituted by the third Mughal emperor, Akbar. These taxes, which amounted to well over half the output of a peasant cultivator, were paid in the well-regulated silver currency, and caused peasants and artisans to enter larger markets.
The relative peace maintained by the empire during much of the 17th century was a factor in India's economic expansion. The burgeoning European presence in the Indian Ocean, and its increasing demand for Indian raw and finished products, created still greater wealth in the Mughal courts. There was more conspicuous consumption among the Mughal elite, resulting in greater patronage of painting, literary forms, textiles, and architecture, especially during the reign of Shah Jahan. Among the Mughal UNESCO World Heritage Sites in South Asia are: Agra Fort, Fatehpur Sikri, Red Fort, Humayun's Tomb, Lahore Fort, Shalamar Gardens, and the Taj Mahal, which is described as "the jewel of Muslim art in India, and one of the universally admired masterpieces of the world's heritage."
|
How long did the Mughal empire last in India?
|
The Mughal empire was founded by Babur in 1526 and was dissolved by the British in 1857, a span of 331 years.
|
null | false
| null |
I would like to learn about data and AI. What would you recommend?
|
It depends on how much you know today and how much you want to learn. The second aspect is knowing how you learn best. You could register for a course at a university, find material online, or connect with your personal network to find an expert who could help you take in all this new knowledge. In terms of knowledge blocks, you should understand cloud, data sources, data types, use cases, data science, data management, orchestration, data quality, data mesh, data fabric and BI.
|
|
null | false
| null |
What is the best time to study?
|
Morning. Because your mind will be fresh
|
|
null | false
| null |
What is the name of the meme crypto coin that Elon Musk made famous?
|
Elon Musk brought attention to the cryptocurrency Dogecoin by tweeting "One word: Doge". This brought the coin into meme status, with its value starting at $0.0047 and skyrocketing to $0.7376 by May 2021.
|
|
null | false
| null |
Which is a species of fish? Sneaker or Sole
|
Sole
|
|
null | false
| null |
What NFL team was moved out of Ohio and became the Baltimore Ravens in 1996?
|
The owner of the Cleveland Browns, Art Modell, decided to relocate the team from Cleveland to Baltimore. However, a settlement decided that the titles, history, and colors would stay in Cleveland, while the personnel and team were moved to Baltimore. Cleveland was without a football team from 1996 until a replacement franchise was established in 1999.
|
|
1911.09419
| false
| null |
We evaluate our proposed models on three commonly used knowledge graph datasets—WN18RR BIBREF26, FB15k-237 BIBREF18, and YAGO3-10 BIBREF27. Details of these datasets are summarized in Table TABREF18.
We evaluate our proposed models on three commonly used knowledge graph datasets—WN18RR BIBREF26, FB15k-237 BIBREF18, and YAGO3-10 BIBREF27.
|
What benchmark datasets are used for the link prediction task?
|
The answers are shown as follows:
* WN18RR
* FB15k-237
* YAGO3-10
|
null | false
| null |
Write a poem or journal entry about numbers that have special meaning to you.
|
My special number is 2. When I was a child, I would get up early every Saturday morning to go to football training. To be honest with you, I hated the training. I hated the cold, I hated the rain, I hated the mud, and I was never very good at football.
But I will always remember feeling extremely proud playing a game in summertime. The game was in Slough, and I was playing for a team called Old Windsor Tigers. I wore the number 2 shirt, and had an incredible game, playing centre half. That was the only Man of the Match award I ever won. I'm still terrible at football, but I love the sport as a spectator, and that memory will always be special to me.
|
|
null | false
| null |
Provide me with a list of boy names that are currently popular.
|
Boy names that are currently popular include Liam, Noah, Oliver, Elijah, James, William, and Benjamin.
|
|
null | false
| null |
What are common executive roles at large companies?
|
Large companies contain different sets of executives. Most companies have a chief executive officer who runs the company. Another common executive title is the Chief Financial Officer, the person responsible for accounting, financial planning and reporting. Chief Technology Officers are generally responsible for building and maintaining technology infrastructure, the code base and driving innovation within the organization. The Chief Marketing Officer is generally responsible for marketing, brand awareness and demand generation within the organization. A Chief People Officer is often responsible for human resources, recruitment and talent management. A Chief Security Officer protects the safety and security of the organization.
|
|
null | false
| 98
|
In general, building a corpus is carried out in four stages: (1) choosing the target of the corpus and the source of raw data; (2) building a guideline, based on linguistic knowledge, for annotation; (3) annotating or tagging the corpus based on the rules set in the guideline; and (4) reviewing the corpus to check for consistency issues.
A word segmentation corpus can be encoded using the B-I-O tagset, where B, I, and O denote the beginning of a word, the inside of a word, and others, respectively. For example, the sentence “Megabit trên giây là đơn vị đo tốc độ truyền dẫn dữ liệu ." (”Megabit per second is a unit to measure the network traffic.” in English) with the word boundary result “Megabit trên giây là đơn_vị đo tốc_độ truyền_dẫn dữ_liệu ." is encoded as “Megabit/B trên/B giây/B là/B đơn/B vị/I đo/B tốc/B độ/I truyền/B dẫn/I dữ/B liệu/I ./O".
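The encoding step can be sketched in a few lines of Python; this is a simplified illustration of the B-I-O scheme described above (real annotation guidelines handle many more cases):

```python
# Encode a segmented Vietnamese sentence into B-I-O tags: underscores
# join the syllables of one word; the first syllable gets B, the rest I,
# and pure punctuation tokens get O.
def bio_encode(segmented_sentence):
    tags = []
    for token in segmented_sentence.split():
        if all(not ch.isalnum() for ch in token):  # punctuation -> O
            tags.append((token, "O"))
            continue
        syllables = token.split("_")
        tags.append((syllables[0], "B"))
        tags.extend((syl, "I") for syl in syllables[1:])
    return tags

print(bio_encode("Megabit trên giây là đơn_vị đo tốc_độ truyền_dẫn dữ_liệu ."))
# -> [('Megabit', 'B'), ('trên', 'B'), ('giây', 'B'), ('là', 'B'),
#     ('đơn', 'B'), ('vị', 'I'), ('đo', 'B'), ('tốc', 'B'), ('độ', 'I'),
#     ('truyền', 'B'), ('dẫn', 'I'), ('dữ', 'B'), ('liệu', 'I'), ('.', 'O')]
```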
Annotation guidelines can be applied to ensure that the annotated corpus has fewer errors, since the annotation is done manually. Even with such guidelines, however, the resulting corpora are still inconsistent. For example, for the Vietnamese Treebank corpus of the VLSP project, Nguyen et al. listed several Vietnamese word segmentation inconsistencies in the corpus based on POS information and n-gram sequences BIBREF14.
Currently, there are at least three word segmentation corpora used in Vietnamese word segmentation studies and systems. First, Dinh et al. built the CADASA corpus from CADASA's books BIBREF15. Second, Nguyen et al. built the vnQTAG corpus from general news articles BIBREF7. More recently, Ngo et al. introduced the EVBCorpus corpus, which is collected from four sources: news articles, books, law documents, and novels. A part of EVBCorpus, EVBNews, was annotated with common NLP tags, such as word segmentation, chunks, and named entities BIBREF16. All of these corpora are collected from news articles or book stories, and the word boundary tags are annotated manually (as shown in Table I).
Currently, there are at least three word segmentation corpora used in Vietnamese word segmentation studies and systems.
|
Is there any word segmentation for Vietnamese available at the moment?
|
Yes.
|
null | false
| null |
Tell me which of the following sports are Olympic sports: karate, snooker, wrestling, Thai boxing, football, basketball.
|
Karate, freestyle and Greco-Roman wrestling, boxing, football and basketball are Olympic sports.
|
|
null | false
| 148
|
In the context of QA, the goal of RQE is to retrieve answers to a new question by retrieving entailed questions with associated answers. We therefore define question entailment as:
a question A entails a question B if every answer to B is also a complete or partial answer to A.
We present below two examples of consumer health questions (A) and entailed questions (B):
Example 1 (each answer to the entailed question B1 is a complete answer to A1):
A1: What is the latest news on tennitis, or ringing in the ear, I am 75 years old and have had ringing in the ear since my mid 5os. Thank you.
B1: What is the latest research on Tinnitus?
Example 2 (each answer to the entailed question B2 is a partial answer to A2):
A2: My mother has been diagnosed with Alzheimer's, my father is not of the greatest health either and is the main caregiver for my mother. My question is where do we start with attempting to help our parents w/ the care giving and what sort of financial options are there out there for people on fixed incomes.
B2: What resources are available for Alzheimer's caregivers?
The inclusion of partial answers in the definition of question entailment also allows efficient relaxation of the contextual constraints of the original question A to retrieve relevant answers from entailed, but less restricted, questions.
In the context of QA, the goal of RQE is to retrieve answers to a new question by retrieving entailed questions with associated answers. We therefore define question entailment as: • a question A entails a question B if every answer to B is also a complete or partial answer to A.
|
How do the authors define question entailment?
|
A question A entails a question B if every answer to B is also a complete or partial answer to A.
|
null | false
| 325
|
Training a dialogue agent to converse like a human requires extensive supervision. The most common approach is to train models to imitate humans in large corpora of crowdsourced or scraped conversations BIBREF0. These fully-supervised conversations tend to be expensive to collect in sufficient quantity and/or occur in settings with significant differences from the deployment environment BIBREF1. Instead, dialogue agents would ideally learn directly from dialogue, the conversations they participate in after deployment, which are usually abundant, task-specific, dynamic, and cheap. This corresponds to the way humans learn to converse—not merely observing others engaging in “expert-level” conversations, but instead actively adjusting and correcting our speech based on feedback woven throughout our own conversations BIBREF2, BIBREF3. Giving a dialogue agent this ability would enable it to continuously improve and adapt over its lifetime, rather than requiring additional annotation costs for each and every improvement.
However, naively training a dialogue agent on its own conversations yields poor results. For example, training a model on its own output can simply reinforce its existing failure modes, and mistakes by the agent can lead to absurd conversations that no longer resemble the target domain BIBREF4. To combat this, one approach is to allow the agent to request feedback during conversations BIBREF5, BIBREF6, e.g., when it believes it is about to make a mistake. This approach, however, falls victim to the Dunning-Kruger effect BIBREF7, which in this case suggests that a bad model will also be bad at knowing when it is doing a bad job. Regardless of when feedback is requested, existing methods typically require accompanying scalar rewards or adherence to particular templates or structure to ensure that the feedback is usable by the model BIBREF8, BIBREF9, BIBREF10. These requirements may be acceptable for paid annotators, but they impose unnatural workflows on unpaid conversation partners in a standard dialogue environment. Humans are able to request and provide feedback using only natural language; ideally, dialogue agents would be able to do the same.
In this work we propose the self-feeding chatbot, a dialogue agent with the ability to extract new examples from the conversations it participates in after deployment (Figure FIGREF1). Concretely, in addition to being trained on the primary Dialogue task, the agent is trained to predict its speaking partner's satisfaction with its responses. When the conversation seems to be going well, the user's responses (but not the bot's own utterances) become the targets in new training examples for the Dialogue task. When the agent believes it has made a mistake, it instead requests feedback on what it could have said instead. Predicting the feedback that will be provided in a given context becomes an auxiliary task (Feedback) on which the model is also trained. Importantly, these new examples improve the agent's dialogue abilities while using only natural responses from the user that do not require special structure, accompanying numerical feedback, or additional human intervention in order to be used.
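The example-harvesting loop can be sketched as follows; the function names and the satisfaction threshold are illustrative assumptions, not details from the paper:

```python
# Sketch of the self-feeding loop: harvest a Dialogue example when the
# user seems satisfied, otherwise request feedback and harvest a
# Feedback example. Threshold and interfaces are placeholders.
def harvest_example(context, user_response, satisfaction_model, bot,
                    threshold=0.5):
    satisfaction = satisfaction_model.predict(context)
    if satisfaction >= threshold:
        # Conversation going well: the user's reply (not the bot's own
        # utterance) becomes the target of a new Dialogue example.
        return ("dialogue", context, user_response)
    # Likely mistake: ask "What should I have said?" and store the
    # user's answer as a Feedback example for the auxiliary task.
    feedback = bot.request_feedback(context)
    return ("feedback", context, feedback)
```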
With this approach, the conversations the chatbot participates in are sliced into two complementary datasets—one largely protected from the chatbot's mistakes (Dialogue examples), and one which directly addresses them (Feedback examples). We validate our approach on the PersonaChat BIBREF11 dialogue dataset, finding empirically that regardless of the number of available supervised examples, the dialogue ability of the chatbot is always improved by adding the automatically extracted examples of either type, and improves the most by adding both.
The main contributions of this work thus include the following:
The datasets and models described in this paper are available via the ParlAI platform BIBREF12, along with training code. Hyperparameter values are included in Appendix SECREF13.
The most common approach is to train models to imitate humans in large corpora of crowdsourced or scraped conversations
|
What is the most common approach to training a dialogue agent?
|
The most common approach is to train models to imitate humans in large corpora of crowdsourced or scraped conversations.
|
null | false
| null |
What is a mulligan in Magic the Gathering?
|
A mulligan in Magic: The Gathering is an action you can take if you are not happy with your opening hand in a game. Depending on the format you play, different rules apply. For competitive one-versus-one formats, the rule is that you draw an initial hand of seven cards. If you are unhappy with the hand, you can shuffle it back into your deck and again draw a hand of seven cards. If you are then happy with the hand, you can keep it, but you have to choose one card from your hand and put it on the bottom of the deck. If you are still unhappy, you can mulligan again, but then you'll have to put back two cards, and so forth.
|
|
null | false
| null |
Are UGGs considered fashionable?
|
UGGs were originally worn in Australia by surfers and in the '90s became mainstream. However, since then they became unpopular and were even described as “basic girl” footwear. Recently, their popularity has had a resurgence. While the question of whether they are fashionable is largely debated, the comfort they provide is undeniable.
|
|
null | false
| null |
Traditional dried fruit such as raisins, figs, dates, apricots and apples have been a staple of Mediterranean diets for millennia. This is due partly to their early cultivation in the Middle Eastern region known as the Fertile Crescent, made up by parts of modern Iran, Iraq, southwest Turkey, Syria, Lebanon, Palestine, Israel, and northern Egypt. Drying or dehydration also happened to be the earliest form of food preservation: grapes, dates, and figs that fell from the tree or vine would dry in the hot sun. Early hunter-gatherers observed that these fallen fruit took on an edible form, and valued them for their stability as well as their concentrated sweetness.
The earliest recorded mention of dried fruits can be found in Mesopotamian tablets dating to about 1500 BC, which contain what are probably the oldest known written recipes. These clay slabs, written in Akkadian, the daily language of Babylonia, were inscribed in cuneiform and tell of diets based on grains (barley, millet, wheat), vegetables and fruits such as dates, figs, apples, pomegranates, and grapes. These early civilizations used dates, date juice evaporated into syrup and raisins as sweeteners. They included dried fruits in their breads for which they had more than 300 recipes, from simple barley bread for the workers to very elaborate, spiced cakes with honey for the palaces and temples.
The date palm was one of the first cultivated trees. It was domesticated in Mesopotamia more than 5,000 years ago. It grew abundantly in the Fertile Crescent and it was so productive (an average date palm produces 50 kg (100 lbs) of fruit a year for 60 years or more) that dates were the cheapest of staple foods. Because they were so valuable, they were well recorded in Assyrian and Babylonian monuments and temples. The villagers in Mesopotamia dried them and ate them as sweets. Whether fresh, soft-dried or hard-dried, they helped to give character to meat dishes and grain pies. They were valued by travelers for their energy and were recommended as stimulants against fatigue.
Figs were also prized in early Mesopotamia, Palestine, Israel, and Egypt where their daily use was probably greater than or equal to that of dates. As well as appearing in wall paintings, many specimens have been found in Egyptian tombs as funerary offerings. In Greece and Crete, figs grew very readily and they were the staple of poor and rich alike, particularly in their dried form.
Grape cultivation first began in Armenia and the eastern regions of the Mediterranean in the 4th century BC. Raisins were produced by drying grapes in the hot desert sun. Very quickly, viticulture and raisin production spread across northern Africa including Morocco and Tunisia. The Phoenicians and the Egyptians popularized the production of raisins, probably due to the perfect arid environment for sun drying. They put them in jars for storage and allotted them to the different temples by the thousands. They also added them to breads and various pastries, some made with honey, some with milk and eggs.
From the Middle East, these fruits spread through Greece to Italy where they became a major part of the diet. Ancient Romans consumed raisins in spectacular quantities and at all levels of society, including them as a key part of their common meals, along with olives and fresh fruits. Raisined breads were common for breakfast and were consumed with their grains, beans, and cultured milks. Raisins were so valued that they transcended the food realm and became rewards for successful athletes, as well as premium barter currency.
Having dried fruits was essential in ancient Rome as these instructions for housekeepers around 100 BC tell: "She must keep a supply of cooked food on hand for you and the servants. She must keep many hens and have plenty of eggs. She must have a large store of dried pears, sorbs, figs, raisins, sorbs in must, preserved pears and grapes and quinces. She must also keep preserved grapes in grape-pulp and in pots buried in the ground, as well as fresh Praenestine nuts kept in the same way, and Scantian quinces in jars, and other fruits that are usually preserved, as well as wild fruits. All these she must store away diligently every year."
Figs were also extremely popular in Rome. Dried figs were added to bread and formed a major part of the winter food of country people. They were rubbed with spices such as cumin, anise and fennel seeds, or toasted sesame, wrapped in fig leaves and stored in jars. Today, major producing regions include Israel, Jerusalem, Gaza and many other Arabic countries. Dried figs are rich in vitamins, phosphorus and various other important minerals.
Plums, apricots and peaches had their origins in Asia. They were domesticated in China in the 3rd millennium BC and spread to the Fertile Crescent where they were also very popular, fresh and dried alike. They arrived in Greece and Italy much later and were very expensive but valued in the preparation of gourmet dishes with port or stewed with honey and spices.
|
How did early civilizations use dates and dried fruits?
|
Early civilizations used dates, date juice evaporated into syrup and raisins as sweeteners. They included dried fruits in their breads for which they had more than 300 recipes, from simple barley bread for the workers to very elaborate, spiced cakes with honey for the palaces and temples.
|
null | false
| null |
What is the largest state in the US?
|
Alaska
|
|
null | false
| null |
Starting around 2005, the widespread use of peer-to-peer applications such as BitTorrent was a serious concern to many network operators, as the massive amounts of network traffic caused by these applications had a significant impact on traffic engineering and revenues. Some network operators tried to throttle this traffic.
In May 2008, in an IETF Workshop on Peer-to-Peer Infrastructure, several areas of work were identified:
1. A standardized interface for the exchange of information between the underlying IP network and an overlay network, such as a peer-to-peer network. The basic idea is that if the overlay network were aware of the topology and the cost of sending traffic through the underlying IP network, it could optimize decisions with respect to the overlay network's topology (e.g., peer selection) and the routing of traffic through the overlay network. The result would be better performance or Quality of Experience in the application while reducing the utilization of the underlying network infrastructure. This work item led to the establishment of the IETF ALTO working group.
2. Content caches in the network. This has been studied in the IETF DECADE working group. However, no new protocol has been developed and standardized.
3. A new congestion control mechanism in the transport layer for background traffic, which "yields" to standard TCP. This was worked on in the IETF LEDBAT working group and has been standardized in RFC 6817.
4. A new DiffServ code point to mark IP packets to have a lower priority than the default "best effort" category has been standardized in RFC 8622.
The IETF ALTO working group was established in November 2008. The first deliverables were the problem statement, the requirements document, the specification of the core ALTO protocol and an ALTO server discovery mechanism. Since then, various extensions have been specified (see below) or are still work in progress (see IETF ALTO Datatracker).
Originally designed to support peer-to-peer file sharing, the concept is broadly applicable to many network problems. However, as of 2021 it has not achieved widespread deployment on the internet. Nevertheless, there have been experiments in Internet service provider (ISP) networks and a deployment to support large data transfers for the Large Hadron Collider at CERN.
|
What has the IETF ALTO working group accomplished?
|
The IETF ALTO working group was formed to solve the problems caused by the widespread use of peer-to-peer applications such as BitTorrent. The working group has provided a problem statement, requirements document, and the specifications of the core ALTO protocol and ALTO server discovery mechanism. There is still no widespread adoption of the ALTO protocol as of 2021 although some Internet service provider (ISP) networks have experimented with it, and CERN has deployed it to support large data transfers for the Large Hadron Collider.
|
null | false
| null |
WordPress (WP or WordPress.org) is a free and open-source content management system (CMS) written in hypertext preprocessor (PHP) language and paired with a MySQL or MariaDB database with supported HTTPS. Features include a plugin architecture and a template system, referred to within WordPress as "Themes". WordPress was originally created as a blog-publishing system but has evolved to support other web content types including more traditional mailing lists and Internet fora, media galleries, membership sites, learning management systems (LMS) and online stores.
|
What is WordPress?
|
WordPress is a Content Management System (CMS) written in hypertext preprocessor (PHP). It uses either a MySQL or MariaDB database and has a robust library of WordPress plugins and themes for customization.
|
null | false
| 277
|
AliMe Bot is a retrieval-based online service for e-commerce which collects a large number of predefined question-answer pairs. Through data analysis, we find that many variants of a question exist, meaning that a set of questions can correspond to the same answer. Based on this observation, we can naturally view these questions with the same answer as a bag. The bag contains diverse expressions of a question, which may provide more matching evidence than a single question, due to the rich information contained in the bag. Motivated by this fact, and different from the existing query-question (Q-Q) matching method, we propose a new query-bag matching approach for retrieval-based chatbots. Concretely, when a user raises a query, the query-bag matching model provides the most suitable bag and returns the corresponding answer for the bag. To our knowledge, no query-bag matching study exists, and we focus on this new approach in this paper.
Recalling the text matching task BIBREF0: recently, researchers have adopted deep neural networks to model the matching relationship. ESIM BIBREF1 judges the inference relationship between two sentences with an enhanced LSTM and an interaction space. SMN BIBREF2 performs context-response matching for the open-domain dialog system. BIBREF3 explores the usefulness of noisy pre-training in the paraphrase identification task. BIBREF4 surveys the methods for query-document matching in web search, which focus on the topic model, the dependency model, etc. However, none of them pays attention to query-bag matching, which concentrates on the matching between a query and a bag containing multiple questions.
When a user poses a query to the bot, the bot searches for the most similar bag and uses the corresponding answer to reply to the user. The more of the query's information is covered by the bag, the more likely it is that the bag's corresponding answer answers the query. Moreover, the bag should not have too much information exceeding the query. Thus, modelling bag-to-query and query-to-bag coverage is essential in this task.
In this paper, we propose a simple but effective mutual coverage component to model the above-mentioned problem. The coverage is based on the cross-attention matrix of the query-bag pair, which indicates the matching degree of elements between the query and the bag. The mutual coverage is performed by stacking the cross-attention matrix along two directions, i.e., query and bag, at the word level respectively. In addition to the mutual coverage, a word-level bag representation is introduced to help discover the main points of a bag. The bag representation then provides new matching evidence to the query-bag matching model.
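One plausible reading of this mutual coverage is sketched in PyTorch below; the pooling choices and shapes are my assumptions about the idea, not the paper's exact formulation:

```python
# Hedged sketch of mutual coverage: score every query word against every
# bag word, then pool along each direction to measure how well one side
# covers the other. Shapes and max/mean pooling are assumptions.
import torch

def mutual_coverage(query_emb, bag_emb):
    # query_emb: (Lq, d) token embeddings of the query;
    # bag_emb:   (Lb, d) token embeddings of all questions in the bag.
    attn = query_emb @ bag_emb.T            # (Lq, Lb) cross-attention matrix
    bag_to_query = attn.max(dim=1).values   # best bag match per query word
    query_to_bag = attn.max(dim=0).values   # best query match per bag word
    return bag_to_query.mean(), query_to_bag.mean()
```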
We conduct experiments on the AliMe and Quora datasets for query-bag-matching-based information-seeking conversation. Compared with the baselines, we verify the effectiveness of our model. Our model obtains 0.05 and 0.03 $\text{R}_{10}@1$ gains compared to the strongest baseline on the two datasets. The ablation study shows the usefulness of the components. The contributions of this paper are summarized as follows: 1) To the best of our knowledge, we are the first to adopt query-bag matching in information-seeking conversation. 2) We propose the mutual coverage model to measure information coverage in query-bag matching. 3) We release the composite Quora dataset to facilitate research in this area.
In addition to the mutual coverage, a word-level bag representation is introduced to help discover the main points of a bag. The bag representation then provides new matching evidence to the query-bag matching model.
|
What do they introduce to help discover the main points of a bag?
|
A word-level bag representation.
|
null | false
| null |
5 steps to make a fire
|
Step 1: Clear a Circular Space on the Ground.
Step 2: Line the Circular Space With Large Rocks.
Step 3: Place the Small Sticks and Twigs in the Circular Space As a Platform.
Step 4: Place Dry Grass and Leaves on the Kindling.
Step 5: Light the Fire With a Lighter or Matches.
|
|
null | false
| 513
|
# Define slots
slot(s1). slot(s2). slot(s3). slot(s4).
# Define identifiers for the objects in the image
# (there are up to four objects in one image).
obj(o1). obj(o2). obj(o3). obj(o4).
# Assign each slot to an object identifier
{assign_one_slot_to_one_object(X, O): slot(X)}=1 :- obj(O).
# Make sure the matching is one-to-one between slots
# and objects identifiers.
:- assign_one_slot_to_one_object(X1, O1),
   assign_one_slot_to_one_object(X2, O2),
   X1==X2, O1!=O2.
# Define all Neural-Probabilistic Predicates
npp(color_attr(X), [red, blue, green, grey, brown,
    magenta, cyan, yellow, bg]) :- slot(X).
npp(shape_attr(X), [circle, triangle, square, bg]) :- slot(X).
npp(shade_attr(X), [bright, dark, bg]) :- slot(X).
npp(size_attr(X), [big, small, bg]) :- slot(X).
# Object O has the attributes C and S and H and Z if ...
has_attributes(O, C, S, H, Z) :- slot(X), obj(O),
    assign_one_slot_to_one_object(X, O),
    color(+X, -C), shape(+X, -S),
    shade(+X, -H), size(+X, -Z).
|
Evaluation: can you give short info on dataset size at the beginning?
|
The size of the dataset is always determined by the queries in SLASH; these are attached to the source code in Appendix B. In MNIST Addition, there are always two images involved in one question. Consequently, the training dataset comprised 30,000 questions in total and the test dataset 5,000. For ShapeWorld4, one query was asked for each shape, resulting in at most 80,000 questions for training (20,000 images) and at most 20,000 questions (5,000 images) for testing. We have included this information in our new version.
|
null | false
| null |
What is a Husky?
|
A Husky is a type of dog.
|
|
null | false
| null |
What does a typical financial planning and analysis team structure consist of?
|
Typically, a financial planning and analysis team will consist of a financial analyst, senior financial analyst, finance manager, director, and VP. This team will usually fall under the chief financial officer for the company.
|
|
null | false
| null |
What would the world look like, say, 200 years from now?
|
What I envision will happen in the future, say, in 100 years, is that we will have a world with all self-sustaining technology. We will be harnessing power easily from abundant natural resources inside and outside of the solar system. All work will be automated and done by machines. Human labor will not be required anymore, be it in the factories, the service industry or even the medical fields. The elite class, or the top 1% rich in the world, will own all of the corporations and run them. They will not need the rest of the 99% of people in the world, as they will not require their services to run factories and corporations and make them profitable. The elite will see them, including any existing forms of their government, as a liability. There would be absolutely no functioning government in any part of the world, and access to the elite cities or areas will be cut off from the rest of the population. The only people saved would be famous artists or athletes or anyone with creative skills. The rest of the people will be left to fend for themselves, and all the progress made as a civilized society will be lost. Whoever is able to survive will set up small communities and continue life. But life will be hard for lack of food and medical care. Some communities may attempt to take out the elite cities but will fail. Eventually the 99% would perish after a few hundred years. The elites will have problems of their own, like fights between corporations. But almost all of them will survive for generations and several thousands of years until some catastrophic event wipes them all out, be it an alien invasion or something from natural causes.
|
|
null | false
| null |
What are the best social media sites?
|
Twitter, Facebook, TikTok, Snapchat
|
|
null | false
| null |
Classify each of the following as being part of a car, house, or both: window, steering wheel, air conditioner, couch, kitchen, engine
|
Window: Both
Steering Wheel: Car
Air Conditioner: Both
Couch: House
Kitchen: House
Engine: Car
|
|
null | false
| null |
I need to purchase food at the grocery store but I don't have a car to get there. What are some of the other ways I could get the food I need?
|
I could use a ride-hailing service like Uber or Lyft to take me there and back.
I can ask a friend to give me a ride in their car.
I can walk to the store if it is a reasonable distance.
I could ride a bike or a scooter.
I can use a food delivery service like Instacart to deliver my groceries to my house.
I could take public transportation like a bus or the subway, if available in my area.
|
|
null | false
| 76
|
In this work, we have proposed an automated approach for the novel task of suggesting news articles to Wikipedia entity pages to facilitate Wikipedia updating. The process consists of two stages. In the first stage, article–entity placement, we suggest news articles to entity pages by considering three main factors: entity salience in a news article, relative authority, and novelty of news articles for an entity page. In the second stage, article–section placement, we determine the best fitting section in an entity page. Here, we remedy the problem of incomplete entity section profiles by constructing section templates for specific entity classes. This allows us to add missing sections to entity pages. We carry out an extensive experimental evaluation on 351,983 news articles and 73,734 entities coming from 27 distinct entity classes. For the first stage, we achieve an overall performance with P=0.93, R=0.514 and F1=0.676, outperforming our baseline competitors significantly. For the second stage, we show that we can learn incrementally to determine the correct section for a news article based on section templates. The overall performance across different classes is P=0.844, R=0.885 and F1=0.860.
In the future, we will enhance our work by extracting facts from the suggested news articles. Results suggest that the news content cited in entity pages comes from the first paragraphs. However, challenging tasks, such as the canonicalization and chronological ordering of facts, still remain.
challenging tasks, such as the canonicalization and chronological ordering of facts, still remain.
|
What's the downside of this study?
|
Challenging tasks, such as the canonicalization and chronological ordering of facts, still remain.
|
null | false
| null |
How many types of bass are there?
|
There are two main types of bass: the electric bass and the acoustic bass. There are also various other kinds, such as the upright bass (also known as the double bass) and the fretless bass.
|
|
null | false
| 365
|
Words with multiple senses commonly exist in many languages. For example, the word bank can either mean a “financial establishment” or “the land alongside or sloping down to a river or lake”, based on different contexts. Such a word is called a “polyseme”. The task of identifying the meaning of a polyseme in its surrounding context is called word sense disambiguation (WSD). Word sense disambiguation is a long-standing problem in natural language processing (NLP), and has broad applications in other NLP problems such as machine translation BIBREF0 . The lexical sample task and the all-word task are the two main branches of the WSD problem. The former focuses on only a pre-selected set of polysemes whereas the latter intends to disambiguate every polyseme in the entire text. Numerous works have been devoted to the WSD task, including supervised, unsupervised, semi-supervised and knowledge based learning BIBREF1 . Our work focuses on using supervised learning to solve the all-word WSD problem.
Most supervised approaches focus on extracting features from words in the context. Early approaches mostly depend on hand-crafted features. For example, IMS by BIBREF2 uses POS tags, surrounding words and collections of local words as features. These approaches were later improved by combining with word embedding features BIBREF0 , which better represent the words' semantic information in a real-valued space. However, these methods neglect the valuable positional information between the words in the sequence BIBREF3 . The bi-directional Long-Short-Term-Memory (LSTM) approach by BIBREF3 provides one way to leverage the order of words. Recently, BIBREF4 improved the performance by pre-training an LSTM language model with a large unlabelled corpus, and using this model to generate sense vectors for further WSD predictions. However, LSTM significantly increases the computational complexity during the training process.
The development of the so-called “fixed-size ordinally forgetting encoding” (FOFE) has enabled us to consider a more efficient method. As first proposed in BIBREF5 , FOFE provides a way to encode an entire sequence of words of variable length into an almost unique fixed-size representation, while also retaining the positional information of the words in the sequence. FOFE has been applied to several NLP problems in the past, such as language modeling BIBREF5 , named entity recognition BIBREF6 , and word embedding BIBREF7 . The promising results demonstrated by the FOFE approach in these areas inspired us to apply FOFE to the WSD problem. In this paper, we will first describe how FOFE is used to encode a sequence of any length into a fixed-size representation. Next, we elaborate on how a pseudo language model is trained with the FOFE encoding from unlabelled data for the purpose of context abstraction, and how a classifier for each polyseme is built from context abstractions of its labelled training data. Lastly, we provide the experimental results of our method on several WSD data sets to demonstrate performance equivalent to the state-of-the-art approach.
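To make the encoding concrete, here is a minimal sketch of FOFE in Python (our own illustration, not the authors' code; the forgetting factor alpha = 0.7 is an arbitrary choice in (0, 1)):

import numpy as np

def fofe(word_ids, vocab_size, alpha=0.7):
    # z_t = alpha * z_{t-1} + e_t, where e_t is the one-hot vector of word t.
    # For alpha in (0, 1), the final z is an almost unique fixed-size code
    # of the whole variable-length sequence that preserves word order.
    z = np.zeros(vocab_size)
    for w in word_ids:
        z = alpha * z
        z[w] += 1.0
    return z

# fofe([2, 0, 1], 4) differs from fofe([1, 0, 2], 4): positional information is kept.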
To demonstrate the applicability of our approach, we apply topic modelling to msmarco — a comprehension data set based on internet search queries — and collect examples that belong to a number of salient topics, producing 6 small to medium sized RC data sets for the following domains: biomedical, computing, film, finance, law and music.
|
What domains do they produce medium-sized RC data sets for?
|
Biomedical, computing, film, finance, law and music.
|
null | false
| null |
Name some famous cartoon cats.
|
Some famous cartoon cats include Heathcliff, Garfield, Tom, Sylvester, Felix, and the Cheshire Cat.
|
|
null | false
| 62
|
Neural machine translation (NMT) BIBREF0 , BIBREF1 , BIBREF2 has enabled end-to-end training of a translation system without needing to deal with word alignments, translation rules, and complicated decoding algorithms, which are the characteristics of phrase-based statistical machine translation (PBSMT) BIBREF3 . Although NMT can be significantly better than PBSMT in resource-rich scenarios, PBSMT performs better in low-resource scenarios BIBREF4 . Only by exploiting cross-lingual transfer learning techniques BIBREF5 , BIBREF6 , BIBREF7 , can the NMT performance approach PBSMT performance in low-resource scenarios.
However, such methods usually require an NMT model trained on a resource-rich language pair like French INLINEFORM0 English (parent), which is to be fine-tuned for a low-resource language pair like Uzbek INLINEFORM1 English (child). On the other hand, multilingual approaches BIBREF8 propose to train a single model to translate multiple language pairs. However, these approaches are effective only when the parent target or source language is relatively resource-rich like English (En). Furthermore, the parent and child models should be trained on similar domains; otherwise, one has to take into account an additional problem of domain adaptation BIBREF9 .
In this paper, we work on a linguistically distant and thus challenging language pair Japanese INLINEFORM0 Russian (Ja INLINEFORM1 Ru) which has only 12k lines of news domain parallel corpus and hence is extremely resource-poor. Furthermore, the amount of indirect in-domain parallel corpora, i.e., Ja INLINEFORM2 En and Ru INLINEFORM3 En, are also small. As we demonstrate in Section SECREF4 , this severely limits the performance of prominent low-resource techniques, such as multilingual modeling, back-translation, and pivot-based PBSMT. To remedy this, we propose a novel multistage fine-tuning method for NMT that combines multilingual modeling BIBREF8 and domain adaptation BIBREF9 .
We have addressed two important research questions (RQs) in the context of extremely low-resource machine translation (MT) and our explorations have derived rational contributions (CTs) as follows:
To the best of our knowledge, we are the first to perform such an extensive evaluation of extremely low-resource MT problem and propose a novel multilingual multistage fine-tuning approach involving multilingual modeling and domain adaptation to address it.
and our explorations have derived rational contributions (CTs) as follows: To the best of our knowledge, we are the first to perform such an extensive evaluation of extremely low-resource MT problem and propose a novel multilingual multistage fine-tuning approach involving multilingual modeling and domain adaptation to address it.
|
What approach do the authors propose to solve such a problem?
|
A novel multilingual multistage fine-tuning approach involving multilingual modeling and domain adaptation.
|
null | false
| null |
While specific rules for triathlon can vary depending on the governing body (e.g. World Triathlon, USA Triathlon), as well as for an individual race venue, there are some basic universal rules. Traditionally, triathlon is an individual sport and each athlete is competing against the course and the clock for the best time. As such, athletes are not allowed to receive assistance from anyone else outside the race, with the exception of race-sanctioned aid volunteers who distribute food and water on the course.
Triathlons are timed in five sequential sections:
from the start of the swim to the beginning of the first transition (swim time);
from the beginning of the first transition to the end of the first transition (T1 time);
from the start of the cycling to the end of the cycling leg (cycling time);
from the beginning of the second transition to the end of the second transition (T2 time);
finally from the start of the run to the end of the run, at which time the triathlon is completed.
Results are usually posted on official websites and will show for each triathlete his/her swim time; cycle time (with transitions included); run time; and total time. Some races also post transition times separately.
Other rules of triathlon vary from race to race and generally involve descriptions of allowable equipment (for example, wetsuits are allowed in USAT events in the swimming stage of some races when the water temperature is below 25.6 °C (78.1 °F)), and prohibitions against interference between athletes. Additionally, the use of flippers or other swim propulsion and flotation aids is illegal in triathlon and can result in disqualification.
One rule involving the cycle leg is that the competitor's helmet must be donned before the competitor mounts (or even takes possession of, in certain jurisdictions) the bike, and it must remain on until the competitor has dismounted; the competitor may not be required to wear the helmet when not on the bicycle (e.g. while repairing a mechanical problem). Failure to comply with this rule will result in disqualification. Additionally, while on the bike course, all bicycles shall be propelled only by human force and human power. Other than pushing a bicycle, any propulsive action brought on by use of the hands is prohibited. Should a competitor's bike malfunction they can proceed with the race as long as they are doing so with their bicycle in tow. There are also strict rules regarding the 'bike mount' line. Competitors may not begin riding their bicycle out of transition until they are over a clearly marked line. Mounting the bike prior to this may incur a penalty (example: a 15-second time penalty at the London 2012 Olympics was imposed on Jonathan Brownlee, a competitor from Great Britain, for mounting his bike too early.)
Other time penalties can be incurred during the race for, among other things, drafting on the bike in a non-drafting race, improper passing, littering on course, and unsportsmanlike conduct.
|
Extract the basic universal competition sections of the triathlon in order, and explain each section.
|
Triathlons are timed in five sequential sections.
From the start of the swim to the beginning of the first transition (swim time). The use of flippers or other swim propulsion and flotation aids is illegal.
From the beginning of the first transition to the end of the first transition (T1 time).
From the start of the cycling to the end of the cycling leg (cycling time). The competitor's helmet must be donned before the competitor mounts the bike, and it must remain on until the competitor has dismounted.
From the beginning of the second transition to the end of the second transition (T2 time).
Finally from the start of the run to the end of the run, at which time the triathlon is completed.
|
null | false
| null |
Who wrote Democracy in America?
|
Alexis de Tocqueville wrote Democracy in America.
|
|
null | false
| null |
Quentin Jerome Tarantino (/ˌtærənˈtiːnoʊ/; born March 27, 1963) is an American film director, writer, producer, and actor. His films are characterized by stylized violence, extended dialogue including the pervasive use of profanity, and references to popular culture.
Tarantino began his career as an independent filmmaker with the release of the crime film Reservoir Dogs in 1992. His second film, Pulp Fiction (1994), a dark comedy crime thriller, was a major success with critics and audiences, winning numerous awards, including the Palme d'Or and the Academy Award for Best Original Screenplay. In 1996, he appeared in From Dusk till Dawn, also writing the screenplay. Tarantino's third film, Jackie Brown (1997), paid homage to blaxploitation films.
In 2003, Tarantino directed Kill Bill: Volume 1, inspired by the traditions of martial arts films; it was followed by Volume 2 in 2004. He then made the exploitation-slasher Death Proof (2007), part of a double feature with Robert Rodriguez released under the collective title Grindhouse. His next film, Inglourious Basterds (2009), follows an alternate account of World War II. He followed this with Django Unchained (2012), a slave revenge Spaghetti Western, which won him his second Academy Award for Best Original Screenplay. His eighth film The Hateful Eight (2015) is a revisionist Western thriller and opened to audiences with a roadshow release. His most recent film, Once Upon a Time in Hollywood (2019), is a comedy drama set in the late 1960s about the transition of Old Hollywood to New Hollywood. A novelization of the film was also published in 2021, becoming his debut novel.
Tarantino's work has been subject to controversy, such as the depictions of violence, frequent inclusion of racial slurs and the alleged negligence of safety in his handling of stunt scenes on Kill Bill: Volume 2. During Tarantino's career, his films have garnered a cult following as well as critical and commercial success, and he has been considered "the single most influential director of his generation". Apart from receiving the Palme d'Or and two Academy Awards, his other major awards include two BAFTAs and four Golden Globes.
|
Extract a bullet point list in the format {movie} ({date}) from the life of director Quentin Tarantino
|
- Reservoir Dogs (1992)
- Pulp Fiction (1994)
- From Dusk till Dawn (1996)
- Jackie Brown (1997)
- Kill Bill: Volume 1 (2003)
- Kill Bill: Volume 2 (2004)
- Death Proof (2007)
- Inglourious Basterds (2009)
- Django Unchained (2012)
- The Hateful Eight (2015)
- Once Upon a Time in Hollywood (2019)
|
null | false
| null |
Classify each of the following as either a plant or an animal: Oak Tree, mouse, grass, daisy, zebra, venus fly trap, hawk, whale.
|
The following are plants: oak tree, grass, daisy, venus fly trap. The following are animals: mouse, zebra, hawk, whale.
|
|
null | false
| null |
Which is a species of fish? Banjo or Guitar
|
Banjo
|
|
null | false
| null |
The Unix operating system was conceived and implemented in 1969, at AT&T's Bell Labs, in the United States by Ken Thompson, Dennis Ritchie, Douglas McIlroy, and Joe Ossanna. First released in 1971, Unix was written entirely in assembly language, as was common practice at the time. In 1973, in a key pioneering approach, it was rewritten in the C programming language by Dennis Ritchie (with the exception of some hardware and I/O routines). The availability of a high-level language implementation of Unix made its porting to different computer platforms easier.
Due to an earlier antitrust case forbidding it from entering the computer business, AT&T licensed the operating system's source code as a trade secret to anyone who asked. As a result, Unix grew quickly and became widely adopted by academic institutions and businesses. In 1984, AT&T divested itself of its regional operating companies, and was released from its obligation not to enter the computer business; freed of that obligation, Bell Labs began selling Unix as a proprietary product, where users were not legally allowed to modify it.
Onyx Systems began selling early microcomputer-based Unix workstations in 1980. Later, Sun Microsystems, founded as a spin-off of a student project at Stanford University, also began selling Unix-based desktop workstations in 1982. While Sun workstations didn't utilize commodity PC hardware like Linux was later developed for, they represented the first successful commercial attempt at distributing a primarily single-user microcomputer that ran a Unix operating system.
With Unix increasingly "locked in" as a proprietary product, the GNU Project, started in 1983 by Richard Stallman, had the goal of creating a "complete Unix-compatible software system" composed entirely of free software. Work began in 1984. Later, in 1985, Stallman started the Free Software Foundation and wrote the GNU General Public License (GNU GPL) in 1989. By the early 1990s, many of the programs required in an operating system (such as libraries, compilers, text editors, a command-line shell, and a windowing system) were completed, although low-level elements such as device drivers, daemons, and the kernel, called GNU Hurd, were stalled and incomplete.
MINIX was created by Andrew S. Tanenbaum, a computer science professor, and released in 1987 as a minimal Unix-like operating system targeted at students and others who wanted to learn operating system principles. Although the complete source code of MINIX was freely available, the licensing terms prevented it from being free software until the licensing changed in April 2000.
Although not released until 1992, due to legal complications, development of 386BSD, from which NetBSD, OpenBSD and FreeBSD descended, predated that of Linux.
Linus Torvalds has stated on separate occasions that if the GNU kernel or 386BSD had been available at the time (1991), he probably would not have created Linux.
|
History of Linux
|
Linux is a computer operating system created in the early 1990s by Finnish software engineer Linus Torvalds and the Free Software Foundation (FSF).
While still a student at the University of Helsinki, Torvalds started developing Linux to create a system similar to MINIX, a UNIX operating system. In 1991 he released version 0.02; Version 1.0 of the Linux kernel, the core of the operating system, was released in 1994. About the same time, American software developer Richard Stallman and the FSF made efforts to create an open-source UNIX-like operating system called GNU. In contrast to Torvalds, Stallman and the FSF started by creating utilities for the operating system first. These utilities were then added to the Linux kernel to create a complete system called GNU/Linux, or, less precisely, just Linux.
|
1711.10124
| true
| null |
Our system, including software and corpus, is available as an open-source project for free research purposes and we believe that it is a good baseline for the development and comparison of future Vietnamese SRL systems. We plan to integrate this tool into Vitk, an open-source toolkit for processing Vietnamese text, which contains fundamental processing tools and is readily scalable for processing very large text data.
Our system, including software and corpus, is available as an open-source project for free research purposes and we believe that it is a good baseline for the development and comparison of future Vietnamese SRL systems.
|
Are their corpus and software public?
|
Yes.
|
2001.05970
| false
| null |
In order to understand the latent topics of those #MeToo tweets for college followers, we first utilize Latent Dirichlet Allocation (LDA) to label universal topics demonstrated by the users. We determine the optimal topic number by selecting the one with the highest coherence score. Since certain words frequently appear in those #MeToo tweets (e.g., sexual harassment, men, women, story, etc.), we transform our corpus using TF-IDF, a term-weighting scheme that discounts the influence of common terms.
In order to understand the latent topics of those #MeToo tweets for college followers, we first utilize Latent Dirichlet Allocation (LDA) to label universal topics demonstrated by the users.
Since certain words frequently appear in those #MeToo tweets (e.g., sexual harassment, men, women, story, etc.), we transform our corpus using TF-IDF, a term-weighting scheme that discounts the influence of common terms.
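As an illustration, the pipeline described above can be sketched with gensim as follows (our own sketch; `tweets` is a hypothetical list of raw tweet strings, and the authors' exact tooling is not stated):

from gensim.corpora import Dictionary
from gensim.models import LdaModel, TfidfModel
from gensim.models.coherencemodel import CoherenceModel

docs = [t.lower().split() for t in tweets]       # naive tokenization for the sketch
dictionary = Dictionary(docs)
bow = [dictionary.doc2bow(d) for d in docs]
tfidf = TfidfModel(bow)                          # discount frequent terms ("story", "women", ...)
corpus = [tfidf[d] for d in bow]

best = None
for k in range(2, 15):                           # candidate topic numbers
    lda = LdaModel(corpus, num_topics=k, id2word=dictionary, random_state=0)
    score = CoherenceModel(model=lda, texts=docs, dictionary=dictionary,
                           coherence="c_v").get_coherence()
    if best is None or score > best[0]:          # keep the highest-coherence model
        best = (score, k, lda)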
|
How are the topics embedded in the #MeToo tweets extracted?
|
Using Latent Dirichlet Allocation on the TF-IDF-transformed corpus.
|
null | false
| 168
|
Knowledge representation and reasoning (KRR) is the process of representing domain knowledge in formal languages (e.g., SPARQL, Prolog) such that it can be used by expert systems to execute querying and reasoning services. KRR has been applied in many fields including financial regulations, medical diagnosis, law, and so on. One major obstacle in KRR is the creation of large-scale knowledge bases of high quality. For one thing, this requires the knowledge engineers (KEs) not only to have background knowledge in a certain domain but also to have enough skill in knowledge representation. Unfortunately, qualified KEs are in short supply. Therefore, it would be useful to build a tool that allows domain experts without any background in logic to construct and query the knowledge base simply from text.
Controlled natural languages (CNLs) BIBREF0 were developed as a technology that achieves this goal. CNLs are designed based on natural languages (NLs) but with restricted syntax and interpretation rules that determine the unique meaning of a sentence. Representative CNLs include Attempto Controlled English BIBREF1 and PENG BIBREF2 . Each CNL is developed with a language parser which translates the English sentences into an intermediate structure, a discourse representation structure (DRS) BIBREF3 . Based on the DRS structure, the language parsers further translate the DRS into corresponding logical representations, e.g., Answer Set Programming (ASP) BIBREF4 programs. One main issue with the aforementioned CNLs is that the systems do not provide enough background knowledge to preserve the semantic equivalence of sentences that represent the same meaning but are expressed via different linguistic structures. For instance, the sentences Mary buys a car and Mary makes a purchase of a car are translated into different logical representations by the current CNL parsers. As a result, if the user asks the question who is a buyer of a car, these systems will fail to find the answer.
In this thesis proposal, I will present KALM BIBREF5 , BIBREF6 , a system for knowledge authoring and question answering. KALM is superior to the current CNL systems in that KALM has a complex frame-semantic parser which can standardize the semantics of sentences that express the same meaning via different linguistic structures. The frame-semantic parser is built based on FrameNet BIBREF7 and BabelNet BIBREF8 , where FrameNet is used to capture the meaning of the sentence and BabelNet is used to disambiguate the meaning of the entities extracted from the sentence. Experimental results show that KALM achieves superior accuracy in knowledge authoring and question answering as compared to the state-of-the-art systems.
The rest of the paper is organized as follows: Section SECREF2 discusses related work, Section SECREF3 presents the KALM architecture, Section SECREF4 presents KALM-QA, the question answering part of KALM, Section SECREF5 shows the evaluation results, Section SECREF6 describes future work beyond the thesis, and Section SECREF7 concludes the paper.
In this thesis proposal, I will present KALM, a system for knowledge authoring and question answering.
|
What is the KALM?
|
It is a system for knowledge authoring and question answering.
|
null | false
| null |
What is RBI?
|
The Reserve Bank of India, commonly known as the RBI, is India's central bank and the regulatory body responsible for regulation of the Indian banking system. It is under the ownership of the Ministry of Finance, Government of India. It is responsible for the control, issue and maintenance of the supply of the Indian rupee.
It also manages the country's main payment systems and works to promote its economic development. Bharatiya Reserve Bank Note Mudran (BRBNM) is a specialised division of RBI through which it prints and mints Indian currency notes (INR) in four of its currency printing presses located in Nashik (Maharashtra; Western India), Dewas (Madhya Pradesh; Central India), Mysore (Karnataka; Southern India) and Salboni (West Bengal; Eastern India).
The RBI established the National Payments Corporation of India as one of its specialised divisions to regulate the payment and settlement systems in India. The Deposit Insurance and Credit Guarantee Corporation was established by the RBI as one of its specialised divisions for the purpose of providing insurance of deposits and guaranteeing credit facilities to all Indian banks.
|
|
null | false
| 147
|
CLIR systems retrieve documents written in a language that is different from search query language BIBREF0 . The primary objective of CLIR is to translate or project a query into the language of the document repository BIBREF1 , which we refer to as Retrieval Corpus (RC). To this end, common CLIR approaches translate search queries using a Machine Translation (MT) model and then use a monolingual IR system to retrieve from RC. In this process, a translation model is treated as a black box BIBREF2 , and it is usually trained on a sentence level parallel corpus, which we refer to as Translation Corpus (TC).
We address a pitfall of using existing MT models for query translation BIBREF1 . An MT model trained on TC does not have any knowledge of RC. In an extreme setting, where there are no common terms between the target side of TC and RC, a well trained and tested translation model would fail because of vocabulary mismatch between the translated query and documents of RC. Assuming a relaxed scenario where some commonality exists between two corpora, a translation model might still perform poorly, favoring terms that are more likely in TC but rare in RC. Our hypothesis is that a search query translation model would perform better if a translated query term is likely to appear in the both retrieval and translation corpora, a property we call balanced translation.
To achieve balanced translations, it is desirable to construct an MT model that is aware of RC vocabulary. Different types of MT approaches have been adopted for the CLIR task, such as dictionary-based MT, rule-based MT, statistical MT, etc. BIBREF3 . However, to the best of our knowledge, a neural search query translation approach has yet to be taken by the community. NMT models with attention based encoder-decoder techniques have achieved state-of-the-art performance for several language pairs BIBREF4 . We propose a multi-task learning NMT architecture that takes RC vocabulary into account by learning a Relevance-based Auxiliary Task (RAT). RAT is inspired by two word embedding learning approaches: Relevance-based Word Embedding (RWE) BIBREF5 and Continuous Bag of Words (CBOW) embedding BIBREF6 . We show that learning NMT with RAT enables it to generate balanced translations.
NMT models learn to encode the meaning of a source sentence and decode that meaning to generate words in a target language BIBREF7 . In the proposed multi-task learning model, RAT shares the decoder embedding and final representation layer with NMT. Our architecture answers the following question: in the decoding stage, can we constrain an NMT model so that it does not generate only terms that are highly likely in TC? We show that training a strong baseline NMT with RAT achieves roughly a 16% improvement over the baseline. Using a qualitative analysis, we further show that RAT works as a regularizer and prevents NMT from overfitting to the TC vocabulary.
We address a pitfall of using existing MT models for query translation (Sokokov et al., 2013). An MT model trained on TC does not have any knowledge of RC. In an extreme setting, where there are no common terms between the target side of TC and RC, a well trained and tested translation model would fail because of vocabulary mismatch between the translated query and documents of RC.
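A minimal sketch of the parameter sharing described above (our own PyTorch illustration, not the paper's code; the CBOW-style auxiliary input and all sizes are assumptions):

import torch
import torch.nn as nn

class SharedTargetHead(nn.Module):
    # The NMT decoder and the relevance-based auxiliary task (RAT) share the
    # target-side embedding and the final projection, so the output layer is
    # also shaped by retrieval-corpus (RC) statistics, encouraging balanced
    # translations.
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)   # shared decoder embedding
        self.proj = nn.Linear(dim, vocab_size)     # shared final representation layer

    def nmt_logits(self, decoder_state):           # decoder_state: (batch, dim)
        return self.proj(decoder_state)

    def rat_logits(self, context_words):           # context_words: (batch, ctx) from RC
        # CBOW-style objective: predict a word from its averaged RC context.
        return self.proj(self.emb(context_words).mean(dim=1))

A joint training step would then minimize something like nmt_loss + lambda * rat_loss over a TC batch and an RC batch.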
|
What is the pitfall of using existing MT models for query translation?
|
An MT model trained on TC does not have any knowledge of RC. In an extreme setting, where there are no common terms between the target side of TC and RC, a well trained and tested translation model would fail because of vocabulary mismatch between the translated query and documents of RC.
|
null | false
| null |
Which characters belong to DC or Marvel Universe? Lex Luthor, Doctor Doom
|
Doctor Doom is Marvel, Lex Luthor is DC
|
|
null | false
| null |
Alice Twemlow is a writer, critic and educator from the United Kingdom whose work focuses on graphic design. She has been a guest critic at the Yale University School of Art, Maryland Institute College of Art (MICA), and Rhode Island School of Design (RISD). In 2006, the School of Visual Arts (SVA) in New York named Twemlow the chair and co-founder of its Master of Fine Arts in Design Criticism (D-Crit). According to her SVA biography: "Alice Twemlow writes for Eye, Design Issues, I.D., Print, New York magazine and The Architect’s Newspaper." Twemlow is also a contributor to the online publication Voice: AIGA Journal of Design. In 2012 Core77 selected her as a jury captain for the “Design Writing and Commentary” category of the Core77 Design Awards. Twemlow was head of the MA in Design Curating & Writing at Design Academy Eindhoven, 2017-2018, and is now Lector Design at the Royal Academy of Fine Arts (KABK) in The Hague, and Associate Professor at Leiden University.
|
Where has Alice Twemlow been a guest critic?
|
At the Yale University School of Art, the Maryland Institute College of Art, and the Rhode Island School of Design.
|
null | false
| 214
|
After we collected the conversations from both Reddit and Gab, we presented this data to Mechanical Turk workers to label and create intervention suggestions. In order not to over-burden the workers, we filtered out conversations consisting of more than 20 comments. Each assignment consists of 5 conversations. For Reddit, we also present the title and content of the corresponding submission in order to give workers more information about the topic and context. For each conversation, a worker is asked to answer two questions:
Q1: Which posts or comments in this conversation are hate speech?
Q2: If there exists hate speech in the conversation, how would you respond to intervene? Write down a response that can probably hold it back (word limit: 140 characters).
If the worker thinks no hate speech exists in the conversation, then the answers to both questions are “n/a”. To provide context, the definition of hate speech from Facebook: “We define hate speech as a direct attack on people based on what we call protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability.” is presented to the workers. Also, to prevent workers from using hate speech in the response or writing responses that are too general, such as “Please do not say that”, we provide additional instructions and rejected examples.
Each assignment consists of 5 conversations.
|
How many conversations does the assignment consist of?
|
Each assignment consists of 5 conversations.
|
null | false
| null |
Vietnam or Viet Nam[n 3] (commonly abbreviated VN; Vietnamese: Việt Nam, [vîət nāːm] (listen)), officially the Socialist Republic of Vietnam (SRV),[n 4] is a country in Southeast Asia. It is located at the eastern edge of mainland Southeast Asia, with an area of 331,212 square kilometres (127,882 sq mi) and population of 99 million, making it the world's fifteenth-most populous country. Vietnam borders China to the north, and Laos and Cambodia to the west. It shares maritime borders with Thailand through the Gulf of Thailand, and the Philippines, Indonesia, and Malaysia through the South China Sea. Its capital is Hanoi and its largest city is Ho Chi Minh City (commonly referred to by its former name, Saigon).
|
Given this information about Vietnam, which country does it share its northern border with?
|
China
|
null | false
| null |
Classify each of the following as either round or square shape: wheel, paper napkin, building block, ring, coin, ball, grid, bread, bubble, dice, hula hoop, window, stamp, earth, crossword, tambourine, bagel, keyboard key, cube, letter O.
|
Objects that have circular (round) shape include: bagel, ball, bubble, coin, earth, hula hoop, letter O, ring, tambourine, wheel.
Objects that have square shape include: bread, building blocks, crossword, cube, dice, grid, keyboard key, paper napkin, stamp, window.
|
|
null | false
| 60
|
Our speech corpus consists of approximately 3000 hours of speech data including various domains and speaking styles. The data is augmented with noise and room impulse respond to increase the quantity and prevent over-fitting.
Our speech corpus consists of approximately 3000 hours of speech data including various domains and speaking styles.
|
How much speech data do they use?
|
Approximately 3000 hours
|
1808.10267
| false
| null |
We propose a multi-source framework for injecting linearized source parses into NMT. This model consists of two identical RNN encoders with no shared parameters, as well as a standard RNN decoder. For each target sentence, two versions of the source sentence are used: the sequential (standard) version and the linearized parse (lexicalized or unlexicalized). Each of these is encoded simultaneously using the encoders; the encodings are then combined and used as input to the decoder. We combine the source encodings using the hierarchical attention combination proposed by libovicky2017attention. This consists of a separate attention mechanism for each encoder; these are then combined using an additional attention mechanism over the two separate context vectors. This multi-source method is thus able to combine the advantages of both standard RNN-based encodings and syntactic encodings.
This model consists of two identical RNN encoders with no shared parameters, as well as a standard RNN decoder. For each target sentence, two versions of the source sentence are used: the sequential (standard) version and the linearized parse (lexicalized or unlexicalized).
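A minimal sketch of the hierarchical attention combination (our own PyTorch illustration; using the decoder state as the shared query for both levels is an assumption of the sketch):

import torch
import torch.nn.functional as F

def attend(query, states):
    # Standard dot-product attention over one encoder's states.
    scores = torch.einsum("bd,btd->bt", query, states)
    weights = F.softmax(scores, dim=-1)
    return torch.einsum("bt,btd->bd", weights, states)

def hierarchical_combine(query, enc_sequential, enc_parse):
    # One attention per encoder, then a second attention over the two
    # resulting context vectors.
    c1 = attend(query, enc_sequential)      # context from the standard encoder
    c2 = attend(query, enc_parse)           # context from the linearized-parse encoder
    ctx = torch.stack([c1, c2], dim=1)      # (batch, 2, dim)
    scores = torch.einsum("bd,bkd->bk", query, ctx)
    alpha = F.softmax(scores, dim=-1)       # attention over the two encoders
    return torch.einsum("bk,bkd->bd", alpha, ctx)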
|
What kind of encoders are used for the parsed source sentence?
|
The answers are shown as follows:
* RNN encoders
|
null | false
| null |
Which of these is an electronic device? Ipad, TV, Rock, Laptop, Tree.
|
Sure. Ipad, TV, and Laptop are all electronic devices.
|
|
null | false
| null |
Can planets in the Solar System be seen with the naked eye?
|
Yes, under the correct weather conditions, Mercury, Venus, Mars, Saturn and Jupiter can be seen with the naked eye. Other planets in the Solar System can only be observed with strong telescopes. Venus is the third brightest object in the sky and can be seen usually during twilight.
|
|
null | false
| 64
|
Our system comprises of following three steps:
Cloze generation: Most documents typically follow a template: they begin with an introduction that provides an overview and a brief summary of what is to follow. We assume such a structure while constructing our cloze-style questions. When there is no clear demarcation, we treat the first $K\%$ (a hyperparameter, in our case 20%) of the document as the introduction. While noisy, this heuristic generates a large number of clozes given any corpus, which we found to be beneficial for semi-supervised learning despite the noise.
We use a standard NLP pipeline based on Stanford CoreNLP (for SQuAD, TriviaQA and PubMed) and the BANNER Named Entity Recognizer (only for PubMed articles) to identify entities and phrases. Assume that a document comprises introduction sentences $\lbrace q_1, q_2, ... q_n\rbrace$, and the remaining passages $\lbrace p_1, p_2, ... p_m\rbrace$. Additionally, let's say that each sentence $q_i$ in the introduction is composed of words $\lbrace w_1, w_2, ... w_{l_{q_i}}\rbrace$, where $l_{q_i}$ is the length of $q_i$. We consider a $\text{match}(q_i, p_j)$ if there is an exact string match of a sequence of words $\lbrace w_k, w_{k+1}, ... w_{l_{q_i}}\rbrace$ between the sentence $q_i$ and passage $p_j$. If this sequence is either a noun phrase, verb phrase, adjective phrase or a named entity in $p_j$, as recognized by CoreNLP or BANNER, we select it as an answer span $A$. Additionally, we use $p_j$ as the passage $P$ and form a cloze question $Q$ from the answer bearing sentence $q_i$ by replacing $A$ with a placeholder. As a result, we obtain passage-question-answer ($P$, $Q$, $A$) triples (Table 1 shows an example). As a post-processing step, we prune out triples where the word overlap between the question ($Q$) and passage ($P$) is less than 2 words (after excluding the stop words).
The process relies on the fact that answer candidates from the introduction are likely to be discussed in detail in the remainder of the article. In effect, the cloze question from the introduction and the matching paragraph in the body form a question and context passage pair. We create two cloze datasets, one each from the Wikipedia corpus (for SQuAD and TriviaQA) and PubMed academic papers (for the BioASQ challenge), consisting of 2.2M and 1M clozes respectively. From analyzing the cloze data manually, we were able to answer 76% of the questions for the Wikipedia set and 80% for the PubMed set using the information in the passage. In most cases the cloze paraphrased the information in the passage, which we hypothesized to be a useful signal for the downstream QA task.
We also investigate the utility of forming subsets of the large cloze corpus, where we select the top passage-question-answer triples based on different criteria: i) the Jaccard similarity of the answer bearing sentence in the introduction and the passage, ii) the tf-idf scores of answer candidates, and iii) the length of answer candidates. However, we empirically found that we were better off using the entire set rather than these subsets.
Pre-training: We make use of the generated cloze dataset to pre-train an expressive neural network designed for the task of reading comprehension. We work with two publicly available neural network models – the GA Reader BIBREF2 (to enable comparison with prior work) and BiDAF + Self-Attention (SA) model from BIBREF1 (which is among the best performing models on SQuAD and TriviaQA). After pretraining, the performance of BiDAF+SA on a dev set of the (Wikipedia) cloze questions is 0.58 F1 score and 0.55 Exact Match (EM) score. This implies that the cloze corpus is neither too easy, nor too difficult to answer.
Fine Tuning: We fine tune the pre-trained model, from the previous step, over a small set of labelled question-answer pairs. As we shall later see, this step is crucial, and it only requires a handful of labelled questions to achieve a significant proportion of the performance typically attained by training on tens of thousands of questions.
Fine Tuning: We fine tune the pre-trained model, from the previous step, over a small set of labelled question-answer pairs. As we shall later see, this step is crucial, and it only requires a handful of labelled questions to achieve a significant proportion of the performance typically attained by training on tens of thousands of questions.
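For concreteness, a toy sketch of the cloze-generation heuristic described in the first step (our own illustration; candidate_spans is a crude stand-in for the paper's CoreNLP/BANNER phrase extraction):

import re

def candidate_spans(sentence):
    # Toy stand-in: treat capitalized (multi-)word spans as answer candidates.
    return re.findall(r"[A-Z][\w-]+(?:\s+[A-Z][\w-]+)*", sentence)

def make_clozes(sentences, intro_frac=0.2):
    # Treat the first K% of sentences as the introduction (K = 20 here).
    k = max(1, int(len(sentences) * intro_frac))
    intro, body = sentences[:k], sentences[k:]
    clozes = []
    for q in intro:
        for span in candidate_spans(q):
            for p in body:
                if span in p:                           # exact string match
                    question = q.replace(span, "@placeholder")
                    clozes.append((p, question, span))  # (P, Q, A) triple
    return clozes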
|
Is fine-tuning required by the pre-trained model?
|
Yes.
|
null | false
| null |
What movies made Seattle famous in the 1990s?
|
Three films were iconic and set in Seattle in the 1990s. Sleepless in Seattle (1993), Singles (1992) & Reality Bites (1994)
|
|
null | false
| null |
Which continent does the following countries belong to: USA, Sweden, China, Taiwan, Japan, Germany, Great Britain, Canada, Peru, Chile
|
The USA belongs to the continent of North America, Sweden belongs to Europe, China belongs to Asia, Taiwan belongs to Asia, Japan is in Asia, Germany is in Europe, Great Britain is in Europe, Canada is in North America, Peru is in South America, and Chile is in South America.
|
|
null | false
| null |
Franklin Edson (April 5, 1832 – September 24, 1904) was an American merchant who served as the 85th Mayor of New York from 1883 to 1884.
Early life
Edson was born in Chester, Vermont on April 5, 1832, where his father had a farm. A descendant of the Puritans, he was the son of Soviah (née Wilson) Edson and Opher Edson.
He was educated at the local schools and at the Chester Academy in Vermont.
Career
Business
At age twenty, Edson moved to Albany to work in his brother Cyrus' distillery, becoming a partner three years later.
He left the distillery after his brother's death and started a produce business, which he relocated to New York City in 1866. His venture proved successful during the American Civil War, making Edson wealthy and enabling him to engage in civic, religious and charitable causes. He was an active Episcopalian and a member of Saint James Church, Fordham, in the Bronx.
In 1873, he became one of the city's most important business leaders when he was appointed President of the New York Produce Exchange.
Politics
An anti-Tammany Democrat, in 1882 he was nominated for Mayor through the efforts of Tammany Hall boss John Murphy to avoid a Democratic Party split between organization loyalists and reformers. Upon taking office in 1883, he angered reformers by appointing Tammany men to key jobs, but he soon embraced civil service reform and other honest government measures.
During his term the Brooklyn Bridge was dedicated, the Manhattan Municipal Building was constructed, and work was completed on the city's new water supply, the Croton Aqueduct. He appointed the commission responsible for the selection and location of public lands for parks in the Bronx, which came to include Van Cortlandt, Bronx, Pelham Bay, Crotona, Claremont and St. Mary's Parks, and the Mosholu, Bronx River, Pelham, and Crotona Parkways.
After Edson split with Tammany the 1884 Democratic nomination for Mayor went to William Russell Grace, who had also preceded Edson as Mayor, and Edson retired from politics at the completion of his term in 1885.
After leaving the mayor's office, Edson returned to his business interests and continued his philanthropic activities.
Personal life
In 1856, Edson was married to Frances Cameron "Fannie" Wood (1835–1893), the daughter of Benjamin Howland Wood. Fannie was the granddaughter of Jethro Wood, inventor of the cast-iron moldboard plow. They owned a homestead in Morris Heights, Bronx that consisted of three acres, a stately residence, stable and barn, which he exchanged for 247 Central Park West, two doors south of the corner of 85th Street, in 1893. Together, Fannie and Franklin were the parents of:
Cyrus Edson (1857–1903), a doctor who married Virginia Churchill Paige (1853–1891). He served as the President of the Board of Health.
Franklin (1859-1926), who married Elsie Squier.
David Orr Edson (1862–1923), a doctor.
Henry Townsend Edson (1864–1903), who married Margarita Diehl. In 1903, Henry Edson murdered his friend's wife and then committed suicide.
Edith Edson (b. 1870), who married Willis Benner.
Robert S. (1873-1941), who married Fanny Ropes.
Ethel Townsend Edson (b. 1877), who married Arthur Hoffman Van Brunt (b. 1865).
He died at his home in Manhattan on September 24, 1904. He was buried in Section 15, Lot 16 at Albany Rural Cemetery, Menands, New York.
Legacy
Edson Avenue in The Bronx is named for him.
|
What is the Summary of this page?
|
Franklin Edson was the 85th Mayor of New York. He started a produce business and became one of the city's most important business leaders. Later, he entered politics as an anti-Tammany Democrat and became Mayor of New York in 1883. He led the effort to select public lands for parks in the Bronx. He was married to Frances Cameron "Fannie" Wood and they had seven children.
|
null | false
| null |
Classify each of the following as part of the upper or lower body: arm, toe, finger, ankle, elbow, chin, chest, shin
|
Arm: Upper Body
Toe: Lower Body
Finger: Upper Body
Ankle: Lower Body
Elbow: Upper Body
Chin: Upper Body
Chest: Upper Body
Shin: Lower Body
|
|
null | false
| 71
|
In the era of social media and networking platforms, Twitter has been blamed for abuse and harassment toward users, specifically women. In fact, online harassment has become very common on Twitter, and there has been much criticism that Twitter has become a platform on which many racists, misogynists and hate groups can express themselves openly. Online harassment usually takes verbal or graphical form and is considered harassment because it is neither invited nor has the consent of the recipient. Monitoring content including sexism and sexual harassment in traditional media is easier than monitoring it on online social media platforms like Twitter, mainly because of the large amount of user generated content in these media. Research on the automated detection of content containing sexual harassment is therefore an important issue and could be the basis for removing that content or flagging it for human evaluation. The basic goal of this automatic classification is to significantly improve the process of detecting these types of hate speech on social media by reducing the time and effort required of human beings.
Previous studies have focused on collecting data about sexism and racism in very broad terms or have proposed two categories of sexism, benevolent or hostile sexism BIBREF0, which undermines other types of online harassment. However, there are not many studies that focus on the different types of online harassment alone using natural language processing techniques.
In this paper we present our work, which is part of the SociaL Media And Harassment Competition of the ECML PKDD 2019 Conference. The topic of the competition is the classification of different types of harassment, and it is divided into two tasks. The first one is the classification of tweets into harassment and non-harassment categories, while the second one is the classification into specific harassment categories: indirect harassment, physical harassment and sexual harassment. We use the dataset of the competition, which includes the text of tweets in the aforementioned categories. Our approach is based on recurrent neural networks; in particular, we use a deep, classification-specific attention mechanism. Moreover, we present a comparison between different variations of this attention-based approach, such as multi-attention and single-attention models. The next section includes a short description of the related work, while the third section includes a description of the dataset. After that, we describe our methodology. Finally, we describe the experiments, present the results, and state our conclusion.
Previous studies have focused on collecting data about sexism and racism in very broad terms or have proposed two categories of sexism, benevolent or hostile sexism [1], which undermines other types of online harassment. However, there are not many studies that focus on the different types of online harassment alone using natural language processing techniques.
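For context, a minimal sketch of a single-attention RNN classifier of the kind described above (our own PyTorch illustration, not the competition system; all sizes are arbitrary):

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveRNNClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=64, classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.query = nn.Parameter(torch.randn(2 * hidden))  # learned attention query
        self.clf = nn.Linear(2 * hidden, classes)

    def forward(self, tokens):                  # tokens: (batch, seq)
        h, _ = self.rnn(self.emb(tokens))       # (batch, seq, 2*hidden)
        a = F.softmax(h @ self.query, dim=1)    # one attention weight per token
        pooled = (h * a.unsqueeze(-1)).sum(1)   # attention-weighted sentence vector
        return self.clf(pooled)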
|
What were the shortcomings of previous studies?
|
There are not many studies that focus on the different types of online harassment alone using natural language processing techniques.
|
1909.13668
| false
| null |
We conduct various experiments to illustrate the properties that are encouraged via different KL magnitudes. In particular, we revisit the interdependence between rate and distortion, and shed light on the impact of KL on the sharpness of the approximated posteriors. Then, through a set of qualitative and quantitative experiments for text generation, we demonstrate how certain generative behaviours could be imposed on VAEs via a range of maximum channel capacities. Finally, we run some experiments to find if any form of syntactic information is encoded in the latent space. For all experiments, we use the objective function of eqn. DISPLAY_FORM6 with $\beta =1$. We do not use larger $\beta $s because the constraint $\text{KL}=C$ is always satisfied.
We conduct various experiments to illustrate the properties that are encouraged via different KL magnitudes. In particular, we revisit the interdependence between rate and distortion, and shed light on the impact of KL on the sharpness of the approximated posteriors. Then, through a set of qualitative and quantitative experiments for text generation, we demonstrate how certain generative behaviours could be imposed on VAEs via a range of maximum channel capacities. Finally, we run some experiments to find if any form of syntactic information is encoded in the latent space.
|
What different properties of the posterior distribution are explored in the paper?
|
The answers are shown as follows:
* interdependence between rate and distortion
* impact of KL on the sharpness of the approximated posteriors
* demonstrate how certain generative behaviours could be imposed on VAEs via a range of maximum channel capacities
* some experiments to find if any form of syntactic information is encoded in the latent space
|
null | false
| 74
|
Table TABREF26 reports the evaluation of our approach on both the CyberAttack and PoliticianDeath event categories. Our approach is configured such that each iteration starts with 1 new keyword discovered in the previous iteration.
Our approach improves LR by 5.17% (Accuracy) and 18.38% (AUC), and MLP by 10.71% (Accuracy) and 30.27% (AUC) on average. Such significant improvements clearly demonstrate that our approach is effective at improving model performance. We observe that the target models generally converge between the 7th and 9th iteration on both datasets when performance is measured by AUC. The performance can slightly degrade when the models are further trained for more iterations on both datasets. This is likely due to the fact that over time, the newly discovered keywords entail lower novel information for model training. For instance, for the CyberAttack dataset the new keyword in the 9th iteration `election' frequently co-occurs with the keyword `russia' in the 5th iteration (in microposts that connect Russian hackers with US elections), thus bringing limited new information for improving the model performance. As a side remark, we note that the models converge faster when performance is measured by accuracy. Such a comparison result confirms the difference between the metrics and shows the necessity for more keywords to discriminate event-related microposts from non event-related ones.
Our approach improves LR by 5.17% (Accuracy) and 18.38% (AUC), and MLP by 10.71% (Accuracy) and 30.27% (AUC) on average.
|
How much does the approach improve LR's accuracy?
|
5.17%.
|
null | false
| null |
List five titles by Veronica Roth.
|
Divergent, Allegiant, Insurgent, Carve the Mark, Chosen Ones
|
|
null | false
| null |
Identify which instrument is string or percussion: Celesta, Masenqo
|
Masenqo is string, Celesta is percussion.
|
|
null | false
| 93
|
“Laughter is the best medicine” is a saying that is popular with most people. Humor is a form of communication that bridges the gap between various languages, cultures, ages and demographics, which is why humorous content with funny and witty hashtags is so popular on social media. It is a very powerful tool to connect with an audience. Automatic humor recognition is the task of determining whether a text contains some level of humorous content or not. The first conference on computational humor was organized in 1996, and since then much research has been done in this field. kao2016computational does pun detection in one-liners and dehumor detects humor in Yelp reviews. Because of the complex and interesting aspects involved in detecting humor in texts, it is one of the challenging research fields in Natural Language Processing BIBREF3 . Identifying humor in a sentence sometimes requires a great amount of external knowledge to completely understand it. There are many types of humor, namely anecdotes, fantasy, insult, irony, jokes, quotes, self-deprecation, etc. BIBREF4 , BIBREF5 . Most of the time there are different meanings hidden inside a sentence which are grasped differently by individuals, making the task of humor identification difficult, which is why the development of a generalized algorithm to classify different types of humor is a challenging task.
The majority of research on social media texts has focused on English. A study by schroeder2010half shows that a high percentage of these texts are in non-English languages. fischer2011language gives some interesting information about the languages used on Twitter based on geographical location. With a huge amount of such user generated data available on social media, there is a need to develop technologies for non-English languages. In multilingual regions like South Asia, the majority of social media users speak more than two languages. In India, Hindi is the most spoken language (spoken by 41% of the population) and English is the official language of the country. Twitter has around 23.2 million monthly active users in India. Native speakers of Hindi often put English words in their sentences and transliterate the whole sentence to Latin script while posting on social media, thereby making the task of automatic text classification a very challenging problem. Linguists came up with terms for any type of language mixing, known as `code-mixing' or `code-switching' BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 . Both terms are used interchangeably, but there is a slight difference between the two. Code-mixing refers to the insertion of words, phrases, and morphemes of one language into a statement or an expression of another language, whereas transliteration of every word in a sentence to another script (here Devanagari to Latin) is termed code-switching BIBREF10 . The first tweet in Figure 1 is an example of code-mixing and the second is an example of code-switching. In this paper, we use code-mixing to denote both cases.
In this paper, we present a freely available corpus containing code-mixed tweets in the Hindi and English languages, with tweets written in Latin script. Tweets are manually classified into humorous and non-humorous classes. Moreover, each token in the tweets is also given a language tag which determines the source or origin language of the token (English or Hindi). The paper is organized as follows: we start by describing the corpus and the annotation scheme in Section 2. Section 3 summarizes our supervised classification system, which includes pre-processing of the tweets in the dataset and the feature extraction, followed by the method used to identify humor in tweets. In the next subsection, we describe the classification model and the results of the experiments conducted using character and word level features. In the last section, we conclude the paper, followed by future work and references.
In this paper, we present a freely available corpus containing code-mixed tweets in the Hindi and English languages, with tweets written in Latin script.
|
What languages are part of the corpus?
|
Hindi and English.
|
null | false
| null |
What is the fastest point of sail?
|
The fastest point of sail on a traditional sailing vessel is the beam reach.
|
|
null | false
| null |
Parsenn is a ski area near Davos, Switzerland, offering 35 ski runs.
The Parsenn area is the largest and most modern of Davos' five mountains.
The recently updated Parsenn "red railway" funicular covers a height difference of 1,100 metres (3,609 ft) up to the Weissfluhjoch ridge. One can also reach Gotschna/Parsenn from Klosters with a large cable car; from here, runs of up to 17 kilometres (10.6 mi) lead down into the Prättigau valley.
|
Is Parsenn nice for skiing
|
Parsenn is a ski area near Davos, Switzerland, offering 35 ski runs. The Parsenn area is the largest and most modern of Davos' five mountains and is considered very nice for skiing.
|
null | false
| null |
Why is it so rare to see a unseeded tennis player win a grand slam?
|
To lift a grand slam trophy, a player has to win 7 consecutive best-of-5-set matches, in all likelihood against several top-10 players. Tennis is considered a demanding sport both physically and mentally: players have to be in top shape and ready to battle through many tough 5-set matches over the course of the slam. During grand slam matches, players can run anywhere from 3 to 4 miles, with most of the movement being lateral. The top players have the distinct advantage of a large support staff, including dedicated physios and massage therapists who can get the player ready for subsequent matches. In addition, top players have coaches who may have won slams themselves and thus carry a wealth of experience. Sometimes players also enlist data analysts who perform video and data analysis of opponents to determine their patterns and preferences. Such a large support staff carries a payroll that most unseeded players cannot afford. For this reason, though a qualifier may at times make it to the later stages of a slam, they will rarely get past the quarterfinals, let alone win a grand slam.
|
|
null | false
| null |
Classify these Indian cities into states that they are in.
Hyderabad, Kolkata, Purulia, Medak, Warangal, Darjeeling
|
West Bengal - Kolkata, Purulia, Darjeeling
Telangana - Hyderabad, Medak, Warangal
|
|
null | false
| null |
What is the difference between AWS ECS and EKS?
|
ECS is a scalable container orchestration platform that allows users to run, stop, and manage containers in a cluster. EKS, on the other hand, helps teams to build Kubernetes clusters on AWS without having to install Kubernetes on EC2 compute instances manually.
|
|
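The answer above contrasts the two services at a high level; a rough illustrative sketch of the difference using boto3 is shown below. The cluster names, role ARN, and subnet IDs are placeholders, not working values.

```python
import boto3

# ECS: AWS's native orchestrator. A cluster is just a logical grouping
# of capacity, so creation is a single call with a name.
ecs = boto3.client("ecs")
ecs.create_cluster(clusterName="demo-ecs-cluster")

# EKS: managed Kubernetes. AWS provisions the Kubernetes control plane,
# so the call requires an IAM role and VPC networking configuration.
eks = boto3.client("eks")
eks.create_cluster(
    name="demo-eks-cluster",
    roleArn="arn:aws:iam::123456789012:role/eksClusterRole",
    resourcesVpcConfig={"subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"]},
)
```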
null | false
| null |
Describe some different ways to stop your car from being stolen?
|
Here are some suggestions to reduce the chances of your car being stolen. Firstly, try to park the car in a secure location. Secondly, security devices such as steering wheel locks are known to reduce the chance of car theft. Lastly, you can install a car alarm system; if that is too expensive or difficult to retrofit, there are companies that sell security stickers stating that the car has GPS tracking, which may also deter a potential car thief.
|
|
null | false
| null |
2020 California Proposition 22
Support
Lyft, Uber, DoorDash, Instacart, and Postmates contributed over $205 million to campaigns supporting Prop 22, making it the most expensive ballot measure in California's history. This included major funding for the Yes on Prop 22 campaign, and promotion of the proposed legislation directly to customers using their apps. Beyond in-app messaging, Uber and Lyft also conducted extensive social media and television advertising to persuade voters. Such advertisements informed voters that a "no" outcome for the proposition would put many drivers' occupations in jeopardy, among other messages.
Sponsored support
Some of the companies also forced their workers to support and promote the legislation: Uber sent its drivers in-app messages forcing them to click on either "Yes on Prop 22" or "OK", Instacart ordered its workers to place pro-Prop 22 stickers in customers' shopping bags, and DoorDash forced delivery drivers to use bags saying "Yes on 22".
Title litigation
The ballot title, written by Attorney General Xavier Becerra, is "Exempts App-Based Transportation and Delivery Companies from Providing Employee Benefits to Certain Drivers. Initiative Statute". The Yes on Prop 22 campaign challenged this description as non-neutral in court, but their arguments were rejected by a Sacramento Superior Court judge.
Opposition
The No on Prop 22 campaign was funded by the California Labor Federation, with support from UC Berkeley Labor Center. The campaign received around $19 million in support, mostly from labor groups. Driver groups Rideshare Drivers United, Gig Workers Rising, We Drive Progress, and Mobile Workers United, spoke out against Prop 22. Editorial boards from the New York Times and Los Angeles Times also called on voters to reject Prop 22.
The proposition would add protections specific to app-based workers, different from other independent contractors, but these protections would only apply during the time the worker is "engaged" in fulfilling a specific request and not while the worker is logged in to the app and available to fulfill a request.
|
Based on the reference text, who were the main parties who supported and opposed Prop 22?
|
The main parties to support Prop 22 included Lyft, Uber, Doordash, Instacart, and Postmates. The main parties to oppose Prop 22 included California Labor Federation, Rideshare Drivers United, Gig Workers Rising, We Drive Progress, and Mobile Workers United.
|
null | false
| 76
|
Wikipedia is the largest source of open and collaboratively curated knowledge in the world. Introduced in 2001, it has evolved into a reference work with around 5m pages for the English Wikipedia alone. In addition, entity and event pages are updated quickly via collaborative editing, and all edits are encouraged to include source citations, creating a knowledge base which aims at being both timely and authoritative. As a result, it has become the preferred source of information consumption about entities and events. Moreover, this knowledge is harvested and utilized in building knowledge bases like YAGO BIBREF0 and DBpedia BIBREF1, and used in applications like text categorization BIBREF2, entity disambiguation BIBREF3, entity ranking BIBREF4 and distant supervision BIBREF5, BIBREF6.
However, not all Wikipedia pages referring to entities (entity pages) are comprehensive: relevant information can either be missing or added with a delay. Consider the city of New Orleans and the state of Odisha, which were severely affected by Hurricane Katrina and the Odisha Cyclone, respectively. While Katrina finds extensive mention in the entity page for New Orleans, the Odisha Cyclone, which caused 5 times more human casualties (cf. Figure FIGREF2), is not mentioned in the page for Odisha. Arguably, Katrina and New Orleans are more popular entities, but the Odisha Cyclone was also reported extensively in national and international news outlets. This highlights the lack of important facts in trunk and long-tail entity pages, even in the presence of relevant sources. In addition, previous studies have shown that there is an inherent delay or lag when facts are added to entity pages BIBREF7.
To remedy these problems, it is important to identify information sources that contain novel and salient facts for a given entity page. However, not all information sources are equal. The online presence of major news outlets is an authoritative source due to active editorial control, and their articles are also a timely container of facts. In addition, their use is in line with current Wikipedia editing practice: as shown in BIBREF7, almost 20% of current citations across all entity pages are news articles. We therefore propose news suggestion as a novel task that enhances entity pages and reduces delay while keeping those pages authoritative.
Existing efforts to populate Wikipedia BIBREF8 start from an entity page and then generate candidate documents about this entity using an external search engine (and then post-process them). However, such an approach lacks (a) reproducibility, since rankings vary with time with an obvious bias toward recent news, and (b) maintainability, since document acquisition for each entity has to be performed periodically. To this effect, our news suggestion approach considers a news article as input and determines whether it is valuable for Wikipedia. Specifically, given an input news article INLINEFORM0 and a state of Wikipedia, the news suggestion problem identifies the entities mentioned in INLINEFORM1 whose entity pages can be improved by suggesting INLINEFORM2 . Most works on knowledge base acceleration BIBREF9, BIBREF10, BIBREF11 or Wikipedia page generation BIBREF8 rely on high-quality input sources which are then utilized to extract textual facts for Wikipedia page population. In this work, we do not suggest snippets or paraphrases but rather entire articles which have a high potential importance for entity pages. These suggested news articles could consequently be used for extraction, summarization or population, either manually or automatically – all of which rely on high-quality and relevant input sources.
We identify four properties of good news recommendations: salience, relative authority, novelty and placement. First, we need to identify the most salient entities in a news article. This is done to avoid pollution of entity pages with only marginally related news. Second, we need to determine whether the news is important to the entity as only the most relevant news should be added to a precise reference work. To do this, we compute the relative authority of all entities in the news article: we call an entity more authoritative than another if it is more popular or noteworthy in the real world. Entities with very high authority have many news items associated with them and only the most relevant of these should be included in Wikipedia whereas for entities of lower authority the threshold for inclusion of a news article will be lower. Third, a good recommendation should be able to identify novel news by minimizing redundancy coming from multiple news articles. Finally, addition of facts is facilitated if the recommendations are fine-grained, i.e., recommendations are made on the section level rather than the page level (placement).
Approach and Contributions. We propose a two-stage news suggestion approach to entity pages. In the first stage, we determine whether a news article should be suggested for an entity, based on the entity's salience in the news article, its relative authority and the novelty of the article to the entity page. The second stage takes into account the class of the entity for which the news is suggested and constructs section templates from entities of the same class. The generation of such templates has the advantage of suggesting and expanding entity pages that do not have a complete section structure in Wikipedia, explicitly addressing long-tail and trunk entities. Afterwards, based on the constructed template our method determines the best fit for the news article with one of the sections.
We evaluate the proposed approach on a news corpus consisting of 351,982 articles crawled from the news external references in Wikipedia from 73,734 entity pages. Given the Wikipedia snapshot at a given year (in our case [2009-2014]), we suggest news articles that might be cited in the coming years. The existing news references in the entity pages along with their reference date act as our ground-truth to evaluate our approach. In summary, we make the following contributions.
We identify four properties of good news recommendations: salience, relative authority, novelty and placement.
|
What are the four attributes the authors identified as recommended for good news?
|
Salience, relative authority, novelty and placement.
|
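The four properties above (salience, relative authority, novelty, placement) suggest a natural scoring scheme for the first stage. The following toy sketch is not the paper's actual model; the component scores and the weighting are hypothetical stand-ins that merely illustrate how a more authoritative entity faces a higher bar for inclusion:

```python
from dataclasses import dataclass

@dataclass
class EntityMention:
    entity: str
    salience: float            # centrality of the entity in the article, in [0, 1]
    relative_authority: float  # popularity relative to co-mentioned entities, in [0, 1]
    novelty: float             # 1 - similarity to news already cited on the page

def suggest(mentions, threshold=0.5):
    """Return entities whose pages should receive this news article."""
    suggestions = []
    for m in mentions:
        # Discount by relative authority: highly authoritative entities
        # attract many news items, so only the most relevant qualify.
        score = m.salience * m.novelty * (1.0 - 0.5 * m.relative_authority)
        if score >= threshold:
            suggestions.append((m.entity, round(score, 3)))
    return suggestions

mentions = [EntityMention("Odisha", 0.9, 0.3, 0.95),
            EntityMention("New Orleans", 0.4, 0.9, 0.2)]
print(suggest(mentions))  # Odisha clears the bar; New Orleans does not
```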
null | false
| null |
Which of these famous football players have never won the Balon D'Or: Messi, Xavi, Zidane, Ronaldinho, Iniesta, Neymar
|
From the list of players provided, the ones that have never won the Balon D'Or are Xavi, Iniesta and Neymar. All the other players in the list have won the Balon D'Or at least once.
|
|
null | false
| null |
What is elite shungite?
|
Elite shungite is a rare stone that comes from the Karelia region of Russia. There is no place in the world other than Russia where you can source this stone. It is around 2 billion years old. It is a rare positive stone and one of the only natural materials known to contain fullerenes. Fullerenes are powerful antioxidants, and when they get into our body they are one of the best ways to fight free radicals.
|
|
null | false
| null |
Tell me whether these people are chess Grandmasters or professional boxers: Nikolai Krogius, Evgeny Gleizerov, Dmitry Bivol, Nikolai Valuev, and Alexander Alekhine.
|
Nikolai Krogius, Evgeny Gleizerov and Alexander Alekhine are chess Grandmasters, while Dmitry Bivol and Nikolai Valuev are professional boxers.
|
|
1910.03814
| false
| null |
We annotate the gathered tweets using the crowdsourcing platform Amazon Mechanical Turk. There, we give the workers the definition of hate speech and show some examples to make the task clearer. We then show the tweet text and image and ask them to classify it into one of 6 categories: No attacks to any community, racist, sexist, homophobic, religion based attacks or attacks to other communities. Each of the 150,000 tweets is labeled by 3 different workers to mitigate discrepancies among workers.
We then show the tweet text and image and we ask them to classify it in one of 6 categories: No attacks to any community, racist, sexist, homophobic, religion based attacks or attacks to other communities.
|
What annotations are available in the dataset - tweat used hate speach or not?
|
The answers are shown as follows:
* No attacks to any community
* racist
* sexist
* homophobic
* religion based attacks
* attacks to other communities
|
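Since each tweet receives three independent Mechanical Turk judgments, the labels presumably have to be aggregated; a minimal majority-vote sketch (the aggregation rule is our assumption, not stated in the passage) might look like this:

```python
from collections import Counter

def resolve(labels):
    """Majority label among 3 worker judgments, or None on a 1-1-1 tie."""
    (top, count), = Counter(labels).most_common(1)
    return top if count >= 2 else None

# Hypothetical worker judgments for two tweets.
print(resolve(["racist", "racist", "no attack"]))  # -> 'racist'
print(resolve(["sexist", "racist", "no attack"]))  # -> None (needs adjudication)
```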
null | false
| null |
Why are dogs generally more popular than cats as pets?
|
Although dogs generally seem to require more time and attention, they maintain their place as the number 1 pet in the majority of countries. For example, statistics have shown that over 74% of US pet owners have a dog, compared with 47% owning a cat.
The reason for their popularity likely comes down to their personality. Commonly, dogs are very social and affectionate creatures who enjoy spending time with their owners, whereas cats tend to be more independent animals by nature; although they also require their owner's attention, the majority prefer to keep to themselves. This stereotypical difference in behaviour is usually the main factor determining the popularity of these pets among people.
|
|
null | false
| null |
What is Bollywood
|
Bollywood is a movie industry based in Mumbai, India. It primarily makes movies in the Hindi language. Since it is based in a city previously known as Bombay, the convenient moniker "Bollywood", resembling "Hollywood", is used to describe the Hindi-language film industry.
|
|
null | false
| null |
Tell me three key things to do to drive a car faster on a track
|
Driving a car on a track can be an exhilarating experience, but there a few rules to keep in mind. Here are three key things to remember:
1. Eyes up - Many drivers have a tendency to look directly at the end of the hood or at the next cone or apex. Keep your eyes up to be able to assess where you want to go.
2. Look where you want to go - Your hands on the steering wheel follow your eyes, so look where you want to go, not at what you want to avoid or navigate around. For example, you should be looking at the exit of a turn as you enter the turn (or at the apex if it's a wide turn).
3. Slow in, fast out - Cornering can be particularly tricky, because we tend to turn in too fast and too early, which sets up a slow exit. Brake hard, take it slow, and wait to turn into a corner to best position you and the car to accelerate swiftly out of the turn.
|
|
null | false
| null |
Give me a list of the communes in Barolo DOCG
|
- Barolo
- La Morra
- Cherasco
- Verduno
- Roddi
- Grinzane Cavour
- Diano d'Alba
- Serralunga d'Alba
- Castiglione Falletto
- Monforte d'Alba
- Novello
|
|
null | false
| null |
What do you say to someone that is suicidal?
|
First, acknowledge their feelings. While they may be irrational, they are very real emotions. Tell them that you are worried about them and ask them to seek help. If they will not seek help, take steps to alert the appropriate people. Do not ignore any verbal signs. Do not keep it a secret. Ask them to contact emergency services immediately. The National Suicide Prevention Lifeline is 1-800-273-TALK
|
|
null | false
| null |
Give me a bulleted list of the five most recent vice presidents of the U.S.
|
- Kamala Harris
- Mike Pence
- Joe Biden
- Dick Cheney
- Al Gore
|
|
null | false
| null |
Grenfell (Canada 2016 Census population 1,099) is a town in Southern Saskatchewan, Canada. It is situated at the junction of Highway 47 and the Trans-Canada Highway (Highway 1), 80 miles (130 km) east of Regina, the provincial capital. It is 15 miles (24 km) south of the Qu'Appelle Valley, where Crooked Lake Provincial Park (at Crooked Lake) and Bird's Point Resort (at Round Lake) are popular beach destinations in summer and are accessed by Highway 47. European settlement from Ontario and the British Isles began in 1882, before the Canadian Pacific Railway reached the site of the town, and "the town's name honours Pasco du Pre Grenfell, a railway company official." The post office was established in 1883.
|
Extract all of the dates mentioned in this paragraph and list them using bullets in the format {Date} - {Description}
|
• 2016 - As of this date, the population of Grenfell, Saskatchewan was 1,099 people
• 1882 - European settlement of what became Grenfell, Saskatchewan began in this year.
• 1883 - This is when the post office of Grenfell was established.
|