| paper_id (string, length 10, nullable) | yes_no (bool) | paper_index (int64, 0–519, nullable) | evidence (string, 0–37.7k chars) | question (string, 4–11.7k chars) | answer (string, 1–26k chars) |
|---|---|---|---|---|---|
null | false
| null |
Why are Bengalis famous?
|
1. Their language was declared the sweetest in the world by UNESCO.
2. They are known for their rich culture.
3. Bengalis are famous for their food.
|
|
null | false
| null |
The Cuban Missile Crisis, also known as the October Crisis (of 1962) (Spanish: Crisis de Octubre) in Cuba, the Caribbean Crisis (Russian: Карибский кризис, tr. Karibsky krizis, IPA: [kɐˈrʲipskʲɪj ˈkrʲizʲɪs]) in Russia, or the Missile Scare, was a 35-day (October 16 – November 20, 1962) confrontation between the United States and the Soviet Union, which escalated into an international crisis when American deployments of missiles in Italy and Turkey were matched by Soviet deployments of similar ballistic missiles in Cuba. Despite the short time frame, the Cuban Missile Crisis remains a defining moment in national security and nuclear war preparation. The confrontation is often considered the closest the Cold War came to escalating into a full-scale nuclear war.
|
Given a reference text about the Cuban Missile Crisis, when did the crisis occur and how long did it last?
|
The Cuban Missile Crisis lasted 35 days, from October 16 to November 20, 1962.
|
null | false
| null |
The Walrus and the Whistleblower is a 2020 Canadian documentary film directed by Nathalie Bibeau. The film profiles Phil Demers, a former employee of Marineland who attempted to blow the whistle on allegedly inhumane treatment of animals at the institution.
According to a film industry newsletter, the film was made "in association with the Documentary Channel, and CBC Docs". The Documentary Channel's announcement about the film stated that Demers had "appeared four times on the Joe Rogan show, has testified before the Canadian Senate, and is being sued for $1.5 million for plotting to steal Smooshi, the walrus".
The film premiered as part of the 2020 Hot Docs Canadian International Documentary Festival. Due to the COVID-19 pandemic in Canada it was not screened theatrically, but premiered as part of the festival's online streaming component and aired on CBC Television on May 28 as part of the network's special Hot Docs at Home series of films from the festival.
In June 2020, the film was named as the winner of 2020 Rogers Audience Award, alongside the films 9/11 Kids, There's No Place Like This Place, Anyplace, First We Eat and The Forbidden Reel, and as Overall Favourite at Hot Docs. The Audience Award allowed the film to be fast tracked in the Academy Award for Best Documentary Feature category, "provided it meets all other criteria for eligibility". Northern Banner, a division of Raven Banner Entertainment, subsequently announced that it had acquired the rights to distribute the film in Canada, probably starting before year end. The US and international distribution rights went to Gravitas Ventures.
The film received a nomination for the Donald Brittain Award at the 9th Canadian Screen Awards in 2021.
|
How many awards did the film The Walrus and the Whistleblower win per this entry of information?
|
2. The film won the 2020 Rogers Audience Award and Overall Favourite at Hot Docs; it was only nominated for the Donald Brittain Award in 2021.
|
null | false
| null |
Oral hygiene is the practice of keeping one's mouth clean and free of disease and other problems (e.g. bad breath) by regular brushing of the teeth (dental hygiene) and cleaning between the teeth. It is important that oral hygiene be carried out on a regular basis to enable prevention of dental disease and bad breath. The most common types of dental disease are tooth decay (cavities, dental caries) and gum diseases, including gingivitis, and periodontitis.
General guidelines for adults suggest brushing at least twice a day with a fluoridated toothpaste: brushing last thing at night and at least on one other occasion. Cleaning between the teeth is called interdental cleaning and is as important as tooth brushing. This is because a toothbrush cannot reach between the teeth and therefore only removes about 50% of plaque from the surface of the teeth. There are many tools to clean between the teeth, including floss, tape and interdental brushes; it is up to each individual to choose which tool they prefer to use.
|
How often should you brush your teeth?
|
Oral Hygiene is important to maintain a healthy mouth free of disease. Keeping one's mouth clean avoids problems such as bad breath as well as tooth decay, gum disease and gingivitis. General guidelines for adults suggest brushing at least twice a day with fluoridated toothpaste. A good strategy is to brush your teeth in the morning and before you go to bed each night.
|
null | false
| null |
Why was Michael Jackson the "King of Pop"?
|
Michael Jackson dominated the pop music genre for decades. His music traveled worldwide and was a music sensation across the globe. He was famous since he was a young boy. He stunned the music world with trademark moves like the "moonwalk" and "robot" dance moves.
|
|
null | false
| 2
|
The training objective with the adversarial neural network is different from the baseline model, as it includes the extra worker discriminator. Thus the new objective includes two parts, one being the negative log-likelihood from NER, which is the same as the baseline, and the other being the negative log-likelihood from the worker discriminator.
In order to obtain the negative log-likelihood of the worker discriminator, we use softmax to compute the probability of the actual worker $\bar{z}$ as well, which is defined by:
$$p(\bar{z}|\mathbf {X}, \mathbf {\bar{y}}) = \frac{\exp (\mathbf {o}^{\text{worker}}_{\bar{z}})}{\sum _{z} \exp (\mathbf {o}^{\text{worker}}_z)},$$ (Eq. 22)
where $z$ should enumerate all workers.
Based on the above definition of probability, our new objective is defined as follows:
$$\begin{split}
\text{R}(\Theta , \Theta ^{\prime }, \mathbf {X}, \mathbf {\bar{y}}, \bar{z}) &= \text{loss}(\Theta , \mathbf {X}, \mathbf {\bar{y}}) - \text{loss}(\Theta , \Theta ^{\prime }, \mathbf {X}) \\
\text{~~~~~~} &= -\log p(\mathbf {\bar{y}}|\mathbf {X}) + \log p(\bar{z}|\mathbf {X}, \mathbf {\bar{y}}),
\end{split}$$ (Eq. 23)
where $\Theta $ is the set of all model parameters related to NER, and $\Theta ^{\prime }$ is the set of the remaining parameters which are only related to the worker discriminator, $\mathbf {X}$ , $\mathbf {\bar{y}}$ and $\bar{z}$ are the input sentence, the crowd-annotated NE labels and the corresponding annotator for this annotation, respectively. It is worth noting that the parameters of the common Bi-LSTM are included in the set of $\Theta $ by definition.
In particular, our goal is not to simply minimize the new objective. Actually, we aim for a saddle point, finding the parameters $\Theta $ and $\Theta ^{\prime }$ satisfying the following conditions:
$$\begin{split}
\hat{\Theta } &= \mathop {arg~min}_{\Theta }\text{R}(\Theta , \Theta ^{\prime }, \mathbf {X}, \mathbf {\bar{y}}, \bar{z}) \\
\hat{\Theta }^{\prime } &= \mathop {arg~max}_{\Theta ^{\prime }}\text{R}(\hat{\Theta }, \Theta ^{\prime }, \mathbf {X}, \mathbf {\bar{y}}, \bar{z}) \\
\end{split}$$ (Eq. 24)
where the first equation aims to find one $\Theta $ that minimizes our new objective $\text{R}(\cdot )$ , and the second equation aims to find one $\Theta ^{\prime }$ maximizing the same objective.
Intuitively, the first equation of Formula 24 tries to minimize the NER loss, but at the same time maximize the worker discriminator loss by the shared parameters of the common Bi-LSTM. Thus the resulting features of common Bi-LSTM actually attempt to hurt the worker discriminator, which makes these features worker independent since they are unable to distinguish different workers. The second equation tries to minimize the worker discriminator loss by its own parameter $\Theta ^{\prime }$ .
We use the standard back-propagation method to train the model parameters, the same as the baseline model. In order to incorporate the argmax term of Formula 24, we follow the previous work on adversarial training BIBREF13 , BIBREF15 , BIBREF17 by introducing a gradient reverse layer between the common Bi-LSTM and the CNN module, whose forward pass is the identity but whose backward pass simply negates the gradients.
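The gradient reverse layer described above can be sketched as follows. This is an illustrative standalone sketch, not the authors' implementation: the forward and backward passes are written as plain functions, and the function names and the scaling parameter `lambda_` are hypothetical.

```python
# Sketch of a gradient reversal layer (GRL): the forward pass is the
# identity, while the backward pass negates (and optionally scales)
# the incoming gradients, so that minimizing the discriminator loss
# downstream maximizes it with respect to the shared encoder upstream.

def grl_forward(x):
    """Forward pass: identity; features flow through unchanged."""
    return x

def grl_backward(grad_output, lambda_=1.0):
    """Backward pass: negate the gradients flowing back to the shared
    Bi-LSTM, scaled by a hypothetical trade-off factor lambda_."""
    return [-lambda_ * g for g in grad_output]

# Toy check: features pass through, gradients come back negated.
features = [0.5, -1.2, 3.0]
out = grl_forward(features)
grads = grl_backward([0.1, -0.2, 0.3])
```

In a real autodiff framework this would be registered as a custom operation with these two passes; the sketch only shows the sign flip that makes the shared features worker-independent.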
Thus the new objective includes two parts, one being the negative log-likelihood from NER, which is the same as the baseline, and the other being the negative log-likelihood from the worker discriminator.
|
What is the new training objective with an adversarial neural network?
|
The new objective includes two parts, one being the negative log-likelihood from NER, which is the same as the baseline, and the other being the negative log-likelihood from the worker discriminator.
|
null | false
| null |
Andfjorden is a fjord on the border of Nordland and Troms og Finnmark counties in Norway. It primarily flows between the large islands of Andøya and Senja. Grytøya and the smaller islands Bjarkøya and Krøttøya are located in the fjord. The main crossing is via the Andenes–Gryllefjord Ferry between Andøy and Senja municipalities. Other municipalities through which the fjord flows are Harstad, and Kvæfjord.
|
What are the islands located in the Andfjorden fjord described in this paragraph?
|
The Andfjorden fjord primarily flows between the Andøya and Senja islands. Grytøya and the smaller islands Bjarkøya and Krøttøya are located in the fjord.
|
null | false
| null |
Which of these items will be useful to pack when I’m getting ready for soccer practice: cleats, grass, guitar, water bottle, shin guards, helmet.
|
Cleats, shin guards, and a water bottle may be helpful things to pack when you’re getting ready for soccer practice. A helmet is not typically used in soccer, a guitar is a musical instrument, and grass is the material soccer is usually played on but not something you’ll need to pack.
|
|
null | false
| null |
What is an Exchange Traded Fund (ETF)
|
It's a collection of assets that trades on an exchange. ETFs can include shares, bonds, commodities and currencies.
|
|
null | false
| null |
Tell me about the MGB.
|
The MGB is a two-door British sports car that was produced by the manufacturer MG from 1962 until 1980. The MGB was available in two bodystyles: a two seat convertible roadster known as the "tourer," and from 1967 a fastback 2+2 bodystyle known as the "GT." Most MGBs sold were of the convertible "tourer" variety. The MGB is one of the highest-volume British cars ever produced and was very popular both in the UK home market and overseas, particularly in America. The car was equipped with a 1798 cc four-cylinder gasoline engine and a four-speed manual transmission. All models were equipped with front disk brakes and rear drum brakes. The styling was modified throughout the long production run, with the most substantial change being the move from chrome bumpers to rubber bumpers midway through the 1974 model year to comply with changes in US federal automobile crash testing requirements.
|
|
null | false
| null |
What's a good way to make a baked potato?
|
Preheat your oven to 400 degrees. Poke some holes in the potato with a fork. Spread olive oil all over the potato, and sprinkle kosher salt on it. Place it directly on the oven rack and let it bake for about an hour. It will be ready when a knife can slide into it easily.
|
|
null | false
| null |
Which swimming stroke is the hardest?
|
Butterfly is considered to be the hardest of all swimming strokes. It engages the full body, including the arms, core muscles, legs and feet. In particular, it depends strongly on core strength and the legs. In order to recover the arms over the water, the swimmer needs to execute a strong dolphin kick, which lifts the chest and the rest of the body. Swimming butterfly requires a lot of energy. Amateur swimmers who can swim long distances with freestyle, breaststroke or backstroke typically can cover only a short distance with butterfly, e.g. 25 or 50 meters.
|
|
null | false
| null |
Identify which instrument is string or percussion: Kpanlogo, Shamisen
|
Shamisen is string, Kpanlogo is percussion.
|
|
null | false
| null |
Which of the following are periodic elements: Ti, N, Pt, Fl, Zt, 1L, 2y, O, F, V, Y
|
The following are periodic elements: Ti, N, Pt, Fl, O, F, V, Y
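One way to check such a list programmatically is to filter the candidates against a set of known element symbols. The sketch below hard-codes only the real symbols relevant to this example; a full application would use all 118.

```python
# Minimal sketch: filter candidate strings against a set of real
# periodic-table symbols. Only the symbols relevant to this example
# are included here; a complete set would hold all 118 elements.
ELEMENT_SYMBOLS = {"Ti", "N", "Pt", "Fl", "O", "F", "V", "Y"}

def real_elements(candidates):
    """Return the candidates that are actual element symbols,
    preserving their original order."""
    return [c for c in candidates if c in ELEMENT_SYMBOLS]

candidates = ["Ti", "N", "Pt", "Fl", "Zt", "1L", "2y", "O", "F", "V", "Y"]
result = real_elements(candidates)
```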
|
|
null | false
| null |
Gaya Island (Malay: Pulau Gaya) is a sizeable Malaysian island of 1,465 ha, just 10 minutes off Kota Kinabalu, Sabah and forms part of the Tunku Abdul Rahman National Park. Gaya Island derived its name from the Bajau word "Gayo" which means big and occupies an area of 15 km2 (3,700 acres) with an elevation of up to 300 metres. Several ridges rise more than 600 feet (180 m), peaking at 1,000 feet (300 m), along the backbone of Gaya Island.
Gaya is the largest island in the Tunku Abdul Rahman National Park, closest to downtown Kota Kinabalu (KK) and is covered with dense virgin, tropical forest. It has been a forest reserve since 1923. The island has 20 km of hiking trails and three 5 star resorts named Gayana Marine Resort, home to the Marine Ecology Research Centre, the neighboring Gaya Island Resort (by YTL Hotel Group), the Bunga Raya Island Resort on the north-east part of the island. Historically, Gaya Island was also the site of the English colonialist's British North Borneo Company's harbour, razed by the folk hero Mat Salleh on 9 July 1897.
In recent years, there has been a plan to turn Gaya Island into a city island and tourism hub. A cable car line has also been proposed before to connect with the city centre.
|
Given a reference text about Gaya Island, how many hiking trails and 5 star resorts are on the island?
|
Gaya Island has 20km of hiking trails and three 5 star resorts.
|
null | false
| 185
|
We conduct ablation studies for pre-training objectives, and the results can be seen in Table TABREF40. We observe that our model greatly benefits from the DAE objective for the zero-shot Chinese question generation task. The results also demonstrate that combining DAE and XAE can alleviate the spurious correlation issue and improves cross-lingual NLG.
As shown in Table TABREF41, we use the En-En-QG and Zh-Zh-QG tasks to analyze the effects of different fine-tuning strategies. It can be observed that by fine-tuning the encoder parameters, our model obtains impressive performance for both English and Chinese QG, which shows its strong cross-lingual transfer ability. When fine-tuning all the parameters, the model achieves the best score for English QG, but it suffers a performance drop when evaluated on Chinese QG. We find that fine-tuning the decoder hurts cross-lingual decoding, and the model learns to decode only English words. When only fine-tuning the decoder, the performance degrades by a large margin for both languages because of underfitting, which indicates the necessity of fine-tuning the encoder.
We examine whether low-resource NLG can benefit from cross-lingual transfer. We consider English as the rich-resource language and conduct experiments on few-shot French/Chinese AS. Specifically, we first fine-tune Xnlg on the English AS data, and then fine-tune it on the French or Chinese AS data. We compare with the monolingual supervised model, in which Xnlg is fine-tuned only on the dataset of the target language. As shown in Figure FIGREF49, we can observe that cross-lingual supervision improves performance for few-shot abstractive summarization. As the training data size becomes larger, the performance of the two models gets closer.
In terms of the pipeline model, we can observe that it suffers from the error propagation issue, especially when the source and target languages are all different from the training data
|
Does the pipeline model have any defects?
|
Yes, it suffers from the error propagation issue.
|
null | false
| null |
Stansfeld began his career as a civil servant in HM Customs and Excise. In 1877, he moved to Oxford, and later matriculated as a student at Exeter College, where he studied Medicine, attaining his BA in 1889 and MA in 1893, and qualifying as a doctor in 1897. In 1897 he started the Oxford Medical Mission in Bermondsey, London, where he founded the Stansfeld Oxford & Bermondsey Club in the same year. In 1909, he decided to take Holy Orders and studied part-time at Wycliffe Hall, Oxford. He was appointed Vicar of St Anne's Church, Thorburn Square, Bermondsey in 1910, and then returned to Oxford in 1912 as Rector of St Ebbe's, Oxford (then a slum district) until his retirement in 1926. Stansfeld died at Spelsbury, Oxfordshire, aged 85 in December 1939. He campaigned successfully for the erection of a children's playground and public baths in St Ebbe's. His philanthropy is commemorated by a blue plaque in Paradise Square, unveiled in June 2009.
|
When was Stansfeld's death?
|
Stansfeld died at Spelsbury, Oxfordshire, aged 85 in December 1939
|
null | false
| null |
In the series A Song of Ice and Fire, who is the founder of House Teague?
|
Torrence Teague
|
|
1909.08191
| false
| null |
Based on our theoretical results, we design a general framework for data exploration on scholarly data by semantic queries on knowledge graph embedding space. The main component in this framework is the conversion between the data exploration tasks and the semantic queries. We first outline the semantic query solutions to some traditional data exploration tasks, such as similar paper prediction and similar author prediction. We then propose a group of new interesting tasks, such as analogy query and analogy browsing, and discuss how they can be used in modern digital libraries.
We then propose a group of new interesting tasks, such as analogy query and analogy browsing, and discuss how they can be used in modern digital libraries.
|
What new interesting tasks can be solved based on the uncanny semantic structures of the embedding space?
|
The answers are shown as follows:
* analogy query
* analogy browsing
|
1910.04601
| false
| null |
RC-QED$^{\rm E}$ can be naturally solved by path ranking-based KGC (PRKGC), where the query triplet and the sampled paths correspond to a question and derivation steps, respectively. PRKGC meets our purposes because of its glassboxness: we can trace the derivation steps of the model easily.
RC-QED$^{\rm E}$ can be naturally solved by path ranking-based KGC (PRKGC), where the query triplet and the sampled paths correspond to a question and derivation steps, respectively.
|
What is the baseline?
|
The answers are shown as follows:
* path ranking-based KGC (PRKGC)
|
null | false
| null |
What is a list of knitting pattern abbreviations and their meanings?
|
Kn: knit n stitches
Pn: purl n stitches
S: slip stitch
YO: yarn over
CC: contrast color
Mn: make n stitches
Inc n: increase n stitches
KnT: knit n together
PnT: purl n together
RT: right twist (cabling)
LT: left twist (cabling)
PRT: purl right twist
PLT: purl left twist
(): special instructions
|
|
null | false
| null |
Provide me with a some ideas on healthy activities I can do in nature.
|
Getting out in nature, alone or with friends, is healthy both physically and psychologically. If you are looking for an activity that will get you out in nature, you could consider bushwalking. Many national parks have walking trails of varying length and difficulty. Golf is also a healthy outdoor activity that you can do with a small group of friends or colleagues. If the beach is more your style, then bodyboarding, surfing and snorkelling are beach activities that you can do solo or in a group.
|
|
null | false
| null |
Billie Eilish Pirate Baird O'Connell (/ˈaɪlɪʃ/ EYE-lish; born December 18, 2001) is an American singer and songwriter. She first gained public attention in 2015 with her debut single "Ocean Eyes", written and produced by her brother Finneas O'Connell, with whom she collaborates on music and live shows. In 2017, she released her debut extended play (EP), titled Don't Smile at Me. Commercially successful, it reached the top 15 of record charts in numerous countries, including the US, UK, Canada, and Australia.
Eilish's first studio album, When We All Fall Asleep, Where Do We Go? (2019), debuted atop the US Billboard 200 and UK Albums Chart. It was one of the best-selling albums of the year, buoyed by the success of its fifth single "Bad Guy", Eilish's first number-one on the US Billboard Hot 100. This made her the first artist born in the 21st century to release a chart-topping single. The following year, Eilish performed the theme song "No Time to Die" for the James Bond film of the same name, which topped the UK Singles Chart and won the Academy Award for Best Original Song in 2022. Her subsequent singles "Everything I Wanted", "My Future", "Therefore I Am", and "Your Power" peaked in the top 10 in the US and UK. Her second studio album, Happier Than Ever (2021), topped charts in 25 countries.
|
Given this paragraph about Billie Eilish, tell me who produced her debut single, also tell me what is her chart-topping single?
|
Her brother Finneas O'Connell produced her debut single "Ocean Eyes", and her chart-topping single is "Bad Guy".
|
null | false
| null |
What is Allsvenskan?
|
Allsvenskan is the top soccer league in Sweden. It has 16 teams and was founded in 1924.
|
|
null | false
| null |
Why is Stephen A. Smith a great journalist?
|
Stephen A. Smith is considered a great journalist by many due to his entertaining delivery during television appearances as well as his fearlessness in addressing conflicting view points held by his colleagues. Furthermore, Stephen A. Smith has extensive experience as a popular writer for several newspapers as well as SLAM Magazine prior to his television career with ESPN.
|
|
null | false
| null |
Lata Mangeshkar ([ləˈt̪aː məŋˈɡeːʃkər] born Hema Mangeshkar; 28 September 1929 – 6 February 2022) was an Indian playback singer and occasional music composer. She is widely considered to have been one of the greatest and most influential singers in India and one of the world's greatest singers, according to BBC. Her contribution to the Indian music industry in a career spanning eight decades gained her honorific titles such as the "Queen of Melody", "Nightingale of India", and "Voice of the Millennium".
Lata recorded songs in over thirty-six Indian languages and a few foreign languages, though primarily in Marathi, Hindi, and Bengali. Her foreign languages included English, Indonesian, Russian, Dutch, Nepali, and Swahili. She received several accolades and honors throughout her career. In 1989, the Dadasaheb Phalke Award was bestowed on her by the Government of India. In 2001, in recognition of her contributions to the nation, she was awarded the Bharat Ratna, India's highest civilian honour; she is only the second female singer, after M. S. Subbulakshmi, to receive this honour. France conferred on her its highest civilian award, Officer of the National Order of the Legion of Honour, in 2007.
She was the recipient of three National Film Awards, 15 Bengal Film Journalists' Association Awards, four Filmfare Best Female Playback Awards, two Filmfare Special Awards, the Filmfare Lifetime Achievement Award and many more. In 1974, she was the first Indian playback singer to perform at the Royal Albert Hall in London, England. Her last recorded song was "Saugandh Mujhe Is Mitti ki" released on 30 March 2019 as a tribute to the Indian Army and the Nation.
|
List the languages in which Lata Mangeshkar has recorded songs.
|
Lata Mangeshkar has recorded songs in Marathi, Hindi, Bengali, English, Indonesian, Russian, Dutch, Nepali, and Swahili.
|
null | false
| null |
Due to climate change, deserts are expanding, while heat waves and wildfires are becoming more common. Increased warming in the Arctic has contributed to melting permafrost, glacial retreat and sea ice loss. Higher temperatures are also causing more intense storms, droughts, and other weather extremes. Rapid environmental change in mountains, coral reefs, and the Arctic is forcing many species to relocate or become extinct. Even if efforts to minimise future warming are successful, some effects will continue for centuries. These include ocean heating, ocean acidification and sea level rise.
Climate change threatens people with increased flooding, extreme heat, increased food and water scarcity, more disease, and economic loss. Human migration and conflict can also be a result.
|
What are the main impacts to the earth as a result of climate change.
|
More intense storms and droughts; heat waves and wildfires becoming more common; extinction of animal species; increased food and water scarcity.
|
null | false
| null |
Who is Sachin Tendulkar?
|
Sachin Ramesh Tendulkar is an Indian former international cricketer who captained the Indian national team. He is regarded as one of the greatest batsmen in the history of cricket. He is the all-time highest run-scorer in both ODI and Test cricket, with more than 18,000 runs and 15,000 runs, respectively. He also holds the record for receiving the most man-of-the-match awards in international cricket.
|
|
1807.09671
| false
| null |
The turkers are asked to indicate their preference for system A or B based on the semantic resemblance to the human summary on a 5-point Likert scale (`Strongly preferred A', `Slightly preferred A', `No preference', `Slightly preferred B', `Strongly preferred B'). They are rewarded $0.04 per task. We use two strategies to control the quality of the human evaluation. First, we require the turkers to have a HIT approval rate of 90% or above. Second, we insert some quality checkpoints by asking the turkers to compare two summaries of the same text content but in different sentence orders. Turkers who did not pass these tests are filtered out. Due to budget constraints, we conduct pairwise comparisons for three systems. The total number of comparisons is 3 system-system pairs × 5 turkers × (36 tasks × 1 human summary for Eng + 44 × 2 for Stat2015 + 48 × 2 for Stat2016 + 46 × 2 for CS2016 + 3 × 8 for camera + 3 × 5 for movie + 3 × 2 for peer + 50 × 4 for DUC04) = 8,355. The number of tasks for each corpus is shown in Table TABREF14 . To elaborate with an example, for Stat2015, there are 22 lectures and 2 prompts for each lecture. Therefore, there are 44 tasks (22 × 2) in total. In addition, there are 2 human summaries for each task. We selected three competitive systems (SumBasic, ILP, and ILP+MC) and therefore we have 3 system-system pairs (ILP+MC vs. ILP, ILP+MC vs. SumBasic, and ILP vs. SumBasic) for each task and each human summary. Therefore, we have 44 × 2 × 3 = 264 HITs for Stat2015. Each HIT will be done by 5 different turkers, resulting in 264 × 5 = 1,320 comparisons. In total, 306 unique turkers were recruited and on average 27.3 HITs were completed per turker. The distribution of the human preference scores is shown in Fig. FIGREF34 .
Therefore, there are 44 tasks (22 × 2) in total. In addition, there are 2 human summaries for each task. We selected three competitive systems (SumBasic, ILP, and ILP+MC) and therefore we have 3 system-system pairs (ILP+MC vs. ILP, ILP+MC vs. SumBasic, and ILP vs. SumBasic) for each task and each human summary.
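The comparison count above can be verified with a few lines of arithmetic. The per-corpus task and reference-summary counts below are taken directly from the passage; the variable names are illustrative.

```python
# Recompute the total number of pairwise comparisons described above.
# (tasks, human reference summaries) per corpus, from the passage.
corpora = {
    "Eng":      (36, 1),
    "Stat2015": (44, 2),
    "Stat2016": (48, 2),
    "CS2016":   (46, 2),
    "camera":   (3, 8),
    "movie":    (3, 5),
    "peer":     (3, 2),
    "DUC04":    (50, 4),
}
system_pairs = 3    # ILP+MC vs ILP, ILP+MC vs SumBasic, ILP vs SumBasic
turkers_per_hit = 5

hits = sum(tasks * refs for tasks, refs in corpora.values()) * system_pairs
total_comparisons = hits * turkers_per_hit  # 8,355 per the passage
```

This also reproduces the Stat2015 example in the text: 44 tasks × 2 summaries × 3 pairs = 264 HITs, and 264 × 5 turkers = 1,320 comparisons.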
|
Do they build one model per topic or on all topics?
|
One model per topic.
|
null | false
| 20
|
We present our experience in applying distributional semantics (neural word embeddings) to the problem of representing and clustering documents in a bilingual comparable corpus. Our data is a collection of Russian and Ukrainian academic texts, for which topics are their academic fields. In order to build language-independent semantic representations of these documents, we train neural distributional models on monolingual corpora and learn the optimal linear transformation of vectors from one language to another. The resulting vectors are then used to produce `semantic fingerprints' of documents, serving as input to a clustering algorithm. The presented method is compared to several baselines including `orthographic translation' with Levenshtein edit distance and outperforms them by a large margin. We also show that language-independent `semantic fingerprints' are superior to multi-lingual clustering algorithms proposed in the previous work, at the same time requiring less linguistic resources.
We present our experience in applying distributional semantics (neural word embeddings) to the problem of representing and clustering documents in a bilingual comparable corpus.
|
What experience do the authors present in the paper?
|
Experience in applying distributional semantics (neural word embeddings) to the problem of representing and clustering documents in a bilingual comparable corpus.
|
null | false
| null |
Where Is The Charging Port On A Tesla Model Y?
|
The Tesla Model Y charging port is located on the left side of the car near the rear light cluster. It's straightforward to charge the Tesla Model Y. All you need to do is plug the CCS 2 into your charging inlet, and you're ready to charge. Be aware that many public charging networks require you to have RFID cards or download their apps before you can use them.
|
|
null | false
| null |
What are some uses for a buff?
|
A buff can be used to keep the neck warm, cover the head, or wrap around the forehead like a headband. It can also serve as a snot rag for runny noses, a napkin during meals, and a wash cloth or dish rag when camping.
|
|
null | false
| null |
You've bought a condominium in the city, your first home, with more rooms than your previous apartments. It doesn't have a garage or any yard. Which of the following items would be handy? A lawnmower, a bus pass, a security system, more bathroom towels, wall art, gardening tools, a washer and dryer set, a car
|
You would want to get a bus pass so that you can get around the city, since you don't have a car. Though a lot might be walkable, there is always something that is far enough away.
You would want to buy a home security system; your condominium has fewer inherent security features than a large apartment complex, so it would be helpful to have a bit more peace of mind.
You would want to buy more bathroom towels since your condominium is larger and likely has more bathrooms that need towels.
You would want a washer and dryer set if you don't already have one. You need a way to wash your clothes at home now that you aren't in an apartment, and it would be hard to carry your laundry on the bus.
You might want some wall art; now that you own your own home, you can hang things on the wall without worrying about your lease agreement.
You don't need gardening tools or a lawnmower, since you don't have a yard.
You don't need a car, since you don't have a garage, and you live in the city.
|
|
null | false
| null |
What is a music chord?
|
A chord in music is any harmony of two or more unique notes. In western music, we tend to focus on chords composed of three unique notes, characterized as a triad. The three notes are the root note plus the third and fifth intervals above the root in a diatonic scale of the key of the desired chord. For instance, if we are creating a C major chord, we will use the root (C), the third interval (E), and the fifth interval (G) of the C major scale.
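The root–third–fifth construction can be sketched in a few lines of code. The scale spelling below assumes the C major scale from the example; the function name is illustrative.

```python
# Sketch of building a major triad from a diatonic scale: take the
# root plus the third and fifth scale degrees (list indices 0, 2, 4).
C_MAJOR_SCALE = ["C", "D", "E", "F", "G", "A", "B"]

def triad(scale):
    """Return the root, third, and fifth of the given diatonic scale."""
    return [scale[0], scale[2], scale[4]]

c_major_chord = triad(C_MAJOR_SCALE)  # ["C", "E", "G"]
```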
|
|
null | false
| null |
How do I become a pilot?
|
In the U.S., there are several requirements to become a pilot. These vary based on what type of pilot you want to become (e.g., private or commercial), what aircraft you want to fly, and under what conditions. Generally speaking, all pilots must get a medical certificate to prove they're in good enough health to fly, pass the FAA written exam, complete sufficient training hours, and then pass a check ride. Do all this, and congratulations, you're now a pilot!
|
|
null | false
| null |
Thomas Attewell (7 November 1869 – 6 July 1937) was an English first-class cricketer. Attewell was a right-handed batsman who bowled right-arm medium pace. He was born at Keyworth, Nottinghamshire.
His brother William played first-class cricket for Nottinghamshire and the Marylebone Cricket Club, as well as Test cricket for England. His cousin Walter Attewell played first-class cricket for Nottinghamshire.
|
Did Thomas Attewell bat with a right or left hand?
|
right hand
|
null | false
| 374
|
The ability of a machine to converse with humans in a natural and coherent manner is one of the most challenging goals in AI and natural language understanding. One problem in chat-oriented human-machine dialog systems is replying to a message within conversation contexts. Existing methods can be divided into two categories: retrieval-based methods BIBREF0 , BIBREF1 , BIBREF2 and generation-based methods BIBREF3 . The former ranks a list of candidates and selects a good response. For the latter, an encoder-decoder framework BIBREF3 or a statistical translation method BIBREF4 is usually used to generate a response. It is not easy to maintain the fluency of the generated texts.
The Ubuntu dialogue corpus BIBREF5 is the largest public unstructured multi-turn dialogue corpus, consisting of about one million two-person conversations. The size of the corpus makes it attractive for exploring deep neural network modeling in the context of dialogue systems. Most deep neural networks use word embedding as the first layer. They either use fixed pre-trained word embedding vectors generated on a large text corpus or learn word embeddings for the specific task. The former lacks flexibility for domain adaptation. The latter requires a very large training corpus and significantly increases model training time. The word out-of-vocabulary issue occurs in both cases. The Ubuntu dialogue corpus also contains many technical words (e.g. “ctrl+alt+f1", “/dev/sdb1"). The Ubuntu corpus (V2) contains 823,057 unique tokens, whereas only 22% of the tokens occur in the pre-built GloVe word vectors. Although character-level representations, which model sub-word morphologies, can alleviate this problem to some extent BIBREF6 , BIBREF7 , BIBREF8 , they still have limitations: they learn only morphological and orthographic similarity, rather than semantic similarity (e.g. `car' and `bmw'), and they cannot be applied to Asian languages (e.g. Chinese characters).
In this paper, we generate word embedding vectors on the training corpus based on word2vec BIBREF9 . We then propose an algorithm to combine the generated vectors with pre-trained word embedding vectors from a large general text corpus via vector concatenation. The new word representation maintains information learned from both the general text corpus and the task domain. A nice property of the algorithm is its simplicity: little extra computational cost is added. It addresses the word out-of-vocabulary issue effectively, can be applied to most NLP deep neural network models, and is language-independent. We integrated our method with ESIM (the baseline model) BIBREF10 . The experimental results show that the proposed method significantly improves the performance of the original ESIM model and obtains state-of-the-art results on both the Ubuntu Dialogue Corpus and the Douban Conversation Corpus BIBREF11 . On the Ubuntu Dialogue Corpus (V2), the improvement over the previous best baseline model (single) on INLINEFORM0 is 3.8%, and our ensemble model achieves 75.9% on INLINEFORM1 . On the Douban Conversation Corpus, the improvement over the previous best model (single) on INLINEFORM2 is 3.6%.
Our contributions in this paper are summarized below:
The rest of the paper is organized as follows. In Section SECREF2 , we review related work. In Section SECREF3 , we provide an overview of the ESIM (baseline) model and describe our methods for addressing out-of-vocabulary issues. In Section SECREF4 , we conduct extensive experiments to show the effectiveness of the proposed method. Finally, we conclude with remarks, summarize our findings, and outline future research directions.
Then we propose an algorithm to combine the generated one with the pre-trained word embedding vectors on a large general text corpus based on vector concatenation.
|
What algorithm do they propose?
|
An algorithm that combines word embedding vectors generated on the training corpus with pre-trained word embedding vectors from a large general text corpus, based on vector concatenation.
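A minimal sketch of the described combination, assuming toy dimensions and vocabularies (the paper uses large pre-trained vectors such as GloVe and word2vec vectors trained on the Ubuntu corpus):

```python
# Hedged sketch: concatenate the general pre-trained vector with the
# domain-trained vector for each word, zero-padding whichever side is
# out of vocabulary. Dimensions and dictionaries are illustrative.
def combine(word, pretrained, domain, d_pre=4, d_dom=2):
    v_pre = pretrained.get(word, [0.0] * d_pre)   # general-corpus embedding
    v_dom = domain.get(word, [0.0] * d_dom)       # task-domain embedding
    return v_pre + v_dom                          # concatenation: d_pre + d_dom dims

pretrained = {"car": [0.1, 0.2, 0.3, 0.4]}        # e.g. GloVe-style vectors
domain = {"ctrl+alt+f1": [0.5, 0.6]}              # learned on the task corpus

# A technical token missing from the general vectors still gets a useful vector:
print(combine("ctrl+alt+f1", pretrained, domain))  # [0.0, 0.0, 0.0, 0.0, 0.5, 0.6]
```

Because each side is simply zero-padded when absent, a word known to either vocabulary keeps some signal, which is how the scheme mitigates out-of-vocabulary tokens.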
|
1912.00582
| false
| null |
To create minimal pairs exemplifying a wide array of linguistic contrasts, it is necessary to artificially generate all datasets. This ensures both that we have sufficient unacceptable examples, and that the data is fully controlled, allowing for repeated isolation of a single linguistic phenomenon in each paradigm BIBREF30. The data generation scripts use a basic template to create each paradigm, pulling from a vocabulary of over 3000 words annotated for morphological, syntactic, and semantic features needed to create grammatical and semantically felicitous sentences. Examples SECREF6 and SECREF6 show one such template for the `acceptable' and `unacceptable' sentences within a pair: the sole difference between them is the underlined word, which differs only in whether the anaphor agrees in number with its antecedent. Our generation codebase and scripts are freely available.
This generation procedure is not without limitations, and despite the very detailed vocabulary we use, implausible sentences are occasionally generated (e.g., `Sam ran around some glaciers'). In these cases, though, both the acceptable and unacceptable sentences will be equally implausible given world knowledge, so any difference in the probability assigned to them is still due to the intended grammatical contrast.
To create minimal pairs exemplifying a wide array of linguistic contrasts, it is necessary to artificially generate all datasets. This ensures both that we have sufficient unacceptable examples, and that the data is fully controlled, allowing for repeated isolation of a single linguistic phenomenon in each paradigm BIBREF30. The data generation scripts use a basic template to create each paradigm, pulling from a vocabulary of over 3000 words annotated for morphological, syntactic, and semantic features needed to create grammatical and semantically felicitous sentences. Examples SECREF6 and SECREF6 show one such template for the `acceptable' and `unacceptable' sentences within a pair: the sole difference between them is the underlined word, which differs only in whether the anaphor agrees in number with its antecedent. Our generation codebase and scripts are freely available.
This generation procedure is not without limitations, and despite the very detailed vocabulary we use, implausible sentences are occasionally generated (e.g., `Sam ran around some glaciers'). In these cases, though, both the acceptable and unacceptable sentences will be equally implausible given world knowledge, so any difference in the probability assigned to them is still due to the intended grammatical contrast.
|
How is the data automatically generated?
|
The answers are shown as follows:
* The data generation scripts use a basic template to create each paradigm, pulling from a vocabulary of over 3000 words annotated for morphological, syntactic, and semantic features needed to create grammatical and semantically felicitous sentences.
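The template mechanism can be sketched in a few lines; the vocabulary, features, and template below are toy stand-ins for the paper's 3000-word annotated lexicon:

```python
# Toy sketch of template-based minimal-pair generation for an
# anaphor-agreement paradigm: the pair differs only in whether the
# anaphor agrees in number with its antecedent.
VOCAB = [
    {"word": "the girl", "number": "sg"},
    {"word": "the boys", "number": "pl"},
]
ANAPHORS = {"sg": "herself", "pl": "themselves"}

def make_pair(subject):
    good = ANAPHORS[subject["number"]]
    bad = ANAPHORS["pl" if subject["number"] == "sg" else "sg"]
    template = "{} praised {}."
    return template.format(subject["word"], good), template.format(subject["word"], bad)

print(make_pair(VOCAB[0]))
# ('the girl praised herself.', 'the girl praised themselves.')
```

Because both sentences come from the same template and differ in a single word, any difference in model probability isolates the intended grammatical contrast.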
|
null | false
| null |
Name some of the bravolebrities from The Real Housewives of Dubai
|
Caroline Brooks, Caroline Stanbury, Chanel Ayan, Dr. Sara Al Madani, Lesa Milan Hall and Nina Ali
|
|
null | false
| null |
What is the triple crown of baseball?
|
The Triple Crown is a rare feat in baseball, accomplished when an individual player finishes the season leading the American or National League in home runs, RBIs (runs batted in), and batting average. There have been 14 Triple Crown winners in baseball history.
|
|
null | false
| null |
Who were Jaehaerys I Targaryen's children?
|
Prince Aegon, Princess Daenerys, Prince Aemon, Prince Baelon, Princess Alyssa, Septa Maegelle, Archmaester Vaegon, Princess Daella, Princess Saera, Princess Viserra, Prince Gaemon, Prince Valerion, Princess Gael
|
|
null | false
| null |
What is EDM?
|
EDM is short for electronic dance music. It is a genre made up of electronic musical melodies and is sometimes referred to as club or dance music.
|
|
null | false
| null |
What is a pineapple?
|
The pineapple is a fruit native to South America, Central America and the Caribbean. The word "pineapple" came from European explorers, who thought the fruit looked similar to a pine cone. The pineapple is a tropical plant and must not be kept below 10 degrees Celsius. It takes a long time for a pineapple to grow, depending mainly on the location and the growing conditions it gets. If the temperature is warm, the pineapple will grow faster; on average, it takes about two to three years. If the fruit travels by boat, it must be picked before ripening, since the trip takes a long time, and pineapples sometimes have to travel by boat to be accessible everywhere. Air travel is different: the pineapple can be picked when ripe, since travel time is short. In most countries, the pineapple is called ananas.
|
|
null | false
| 36
|
In this paper, we present a deep neural network model for the task of fine-grained named entity classification using ELMo embeddings and Wikidata. The proposed model learns representations for entity mentions based on its context and incorporates the rich structure of Wikidata to augment these labels into finer-grained subtypes. We can see comparisons of our model made on Wiki(gold) in Table TABREF20 . We note that the model performs similarly to existing systems without being trained or tuned on that particular dataset. Future work may include refining the clustering method described in Section 2.2 to extend to types other than person, location, organization, and also to include disambiguation of entity types.
Future work may include refining the clustering method described in Section 2.2 to extend to types other than person, location, organization, and also to include disambiguation of entity types.
|
What is their future plan?
|
Future work may include refining the clustering method described in Section 2.2 to extend to types other than person, location, organization, and also to include disambiguation of entity types.
|
null | false
| null |
Tell me whether each of these ski resorts are in Colorado or California: Big Bear, Mammoth, North Star, Vail, Aspen, Telluride
|
Big Bear: California
Mammoth: California
North Star: California
Vail: Colorado
Aspen: Colorado
Telluride: Colorado
|
|
null | false
| null |
How do you juggle?
|
Juggling is the act of continually throwing and catching three or more objects, usually balls. Humans juggle for entertainment, recreation, exercise, and fun. Some jugglers are professionally hired to perform at birthday parties, festivals, and events. You juggle by throwing balls from hand to hand, releasing a ball before catching another.
|
|
null | false
| null |
MS Dhoni is a former Indian cricketer who captained the Indian national team in limited-overs formats from 2007 to 2017 and in Test cricket from 2008 to 2014, playing as a wicket-keeper batsman. He is also the current captain of Chennai Super Kings in the Indian Premier League. Under his captaincy, India won the 2007 ICC World Twenty20, the 2011 Cricket World Cup, and the 2013 ICC Champions Trophy, the most by any captain. He also led India to victory in the 2010 and 2016 Asia Cup. Additionally, under his leadership, India won the 2010 and 2011 ICC Test Mace and the 2013 ICC ODI Championship. Dhoni is a right-handed wicket-keeper batsman known for his calm captaincy and his ability to finish matches in tight situations. He has scored 17,266 runs in international cricket (including more than 10,000 runs in ODIs) and is considered one of the best finishers in the game, as well as one of the greatest wicket-keepers and captains in the history of cricket.
|
From the below text about MS Dhoni, tell me what they were most famous for?
|
MS Dhoni is a former cricketer and captain of the Indian national team. He is most famously known for leading India to victory in 2007 ICC World Twenty20, the 2011 Cricket World Cup and the 2013 ICC Champions Trophy. He is also known for his calm captaincy and considered as one of the best finishers and greatest wicket-keepers in the history of the game.
|
null | false
| null |
Write the introduction to a story about a rocket ship on its way to a distant galaxy
|
Deep space. It had been so long since they last had a reference point. How long? Was it days? Weeks? Months? The crew didn't know. They longed for a feature, some contact; a planet, even a small asteroid would do. Deep space is a lonely place.
But the thought of the mission and the uniqueness of their experience kept them going. They had travelled further than any crew before. They had left the solar system and were well on their way to Andromeda. They would be the first, they would be the pioneers who discovered new lands for the human race.
Days continued to pass - or was it weeks again? They started to hear strange sounds from the ship. Was it mechanical issues? Did they have a visitor? Or was it just their minds turning on them?
|
|
null | false
| 176
|
The pre-trained language model BERT BIBREF0 has led to a big breakthrough in various kinds of natural language understanding tasks. Ideally, people can start from a pre-trained BERT checkpoint and fine-tune it on a specific downstream task. However, the original BERT models are memory-exhaustive and latency-prohibitive to serve in embedded devices or CPU-based online environments. As the memory and latency constraints vary in different scenarios, the pre-trained BERT model should be adaptive to different requirements with accuracy retained to the largest extent. Existing BERT-oriented model compression solutions largely depend on knowledge distillation BIBREF1, which is inefficient and resource-consuming because a large training corpus is required to learn the behaviors of a teacher. For example, DistilBERT BIBREF2 is re-trained on the same corpus as pre-training a vanilla BERT from scratch, and TinyBERT BIBREF3 utilizes expensive data augmentation to fit the distillation target. The costs of these model compression methods are as large as pre-training and unaffordable for low-resource settings. Therefore, it is straightforward to ask: can we design a lightweight method to generate adaptive models with comparable accuracy using significantly less time and resource consumption? In this paper, we propose LadaBERT (Lightweight adaptation of BERT through hybrid model compression) to tackle this question. Specifically, LadaBERT is based on an iterative hybrid model compression framework consisting of weight pruning, matrix factorization and knowledge distillation. Initially, the architecture and weights of the student model are inherited from the BERT teacher. In each iteration, the student model is first compressed by a small ratio based on weight pruning and matrix factorization, and is then fine-tuned under the guidance of the teacher model through knowledge distillation.
Because weight pruning and matrix factorization help to generate better initial and intermediate states in the knowledge distillation iterations, the accuracy and efficiency of model compression can be greatly improved.
We conduct extensive experiments on five public datasets of natural language understanding. As an example, the performance comparison of LadaBERT and state-of-the-art models on MNLI-m dataset is illustrated in Figure FIGREF1. We can see that LadaBERT outperforms other BERT-oriented model compression baselines at various model compression ratios. Especially, LadaBERT-1 outperforms BERT-PKD significantly under $2.5\times $ compression ratio, and LadaBERT-3 outperforms TinyBERT under $7.5\times $ compression ratio while the training speed is accelerated by an order of magnitude.
The rest of this paper is organized as follows. First, we summarizes the related works of model compression and their applications to BERT in Section SECREF2. Then, the methodology of LadaBERT is introduced in Section SECREF3, and experimental results are presented in Section SECREF4. At last, we conclude this work and discuss future works in Section SECREF5.
Specifically, LadaBERT is based on an iterative hybrid model compression framework consisting of weight pruning, matrix factorization and knowledge distillation.
|
What does the LadaBERT consist of?
|
It consists of weight pruning, matrix factorization and knowledge distillation.
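One ingredient of this hybrid framework, magnitude-based weight pruning, can be sketched on a toy matrix. Matrix factorization (e.g. a low-rank decomposition) and the distillation fine-tuning step are omitted here; the actual method applies all three iteratively to BERT's weight matrices.

```python
# Illustrative magnitude pruning: zero out roughly the smallest-magnitude
# `ratio` fraction of weights in a matrix (pure-Python toy version).
def prune(matrix, ratio):
    """Keep only weights whose magnitude is at or above the ratio-quantile."""
    flat = sorted(abs(w) for row in matrix for w in row)
    threshold = flat[int(len(flat) * ratio)]  # assumes 0 <= ratio < 1
    return [[w if abs(w) >= threshold else 0.0 for w in row] for row in matrix]

W = [[0.9, -0.1], [0.05, -0.8]]
print(prune(W, 0.5))  # [[0.9, 0.0], [0.0, -0.8]]
```

After each such compression step, the student would be fine-tuned against the teacher's outputs before the next, slightly larger, compression ratio is applied.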
|
null | false
| null |
Danny Cohen introduced the terms big-endian and little-endian into computer science for data ordering in an Internet Experiment Note published in 1980.
The adjective endian has its origin in the writings of 18th century Anglo-Irish writer Jonathan Swift. In the 1726 novel Gulliver's Travels, he portrays the conflict between sects of Lilliputians divided into those breaking the shell of a boiled egg from the big end or from the little end. Because the emperor's son had cut his finger while opening an egg from the big end, doing so was prohibited by an imperial edict; those who rebelled and did so were called "Big-Endians" (Swift did not use the term Little-Endians in the work). Cohen makes the connection to Gulliver's Travels explicit in the appendix to his 1980 note.
|
In computer science, where did the term big-endian originate?
|
Danny Cohen introduced the term big-endian in an Internet Experiment Note published in 1980. In the appendix to that 1980 note, he connected the idea to Gulliver's Travels.
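As a concrete illustration (not part of the original text), the two byte orders can be seen by packing the same 32-bit integer with Python's standard `struct` module:

```python
import struct

# The same 32-bit integer serialized both ways: big-endian puts the most
# significant byte (the "big end", 0x01) first; little-endian puts it last.
value = 0x01020304
print(struct.pack(">I", value).hex())  # '01020304'  (big-endian)
print(struct.pack("<I", value).hex())  # '04030201'  (little-endian)
```

The `>` and `<` prefixes in the format string select big- and little-endian byte order respectively.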
|
null | false
| null |
The 1949 Oregon Webfoots football team represented the University of Oregon in the Pacific Coast Conference (PCC) during the 1949 college football season. In their third season under head coach Jim Aiken, the Webfoots compiled a 4–6 record (2–5 against PCC opponents), finished in a tie for sixth place in the PCC, and outscored their opponents, 250 to 219. The team played its home games at Hayward Field in Eugene, Oregon.
|
What was the football team that played at the University of Oregon in 1949?
|
The team that played football at the University of Oregon in 1949 was the Oregon Webfoots. The coach at the time was Jim Aiken, and the Webfoots held a 4-6 record (2-5 against PCC opponents), and finished with a tie for sixth place. Their home games were played at Hayward Field in Eugene, Oregon.
|
null | false
| 144
|
To verify the proposed approach, we conducted a pilot study on Amazon Mechanical Turk using data from TAC2008 BIBREF36 . We collected importance estimates for 474 propositions extracted from the first three clusters using both task designs. Each Likert-scale task was assigned to 5 different workers and awarded $0.06. For comparison tasks, we also collected 5 labels each, paid $0.05 and sampled around 7% of all possible pairs. We submitted them in batches of 100 pairs and selected pairs for subsequent batches based on the confidence of the TrueSkill model.
Following the observations of Lloret et al. (2013), we established several measures for quality control. First, we restricted our tasks to workers from the US with an approval rate of at least 95%. Second, we identified low-quality workers by measuring the correlation of each worker's Likert scores with the average of the other four scores. The worst workers (at most 5% of all labels) were removed.
In addition, we included trap sentences, similar to those in BIBREF13 , in around 80 of the tasks. In contrast to Lloret et al.'s findings, both an obvious trap sentence (This sentence is not important) and a less obvious but unimportant one (Barack Obama graduated from Harvard Law) were consistently labeled as unimportant (1.08 and 1.14), indicating that the workers did the task properly.
For Likert-scale tasks, we follow Snow et al. (2008) and calculate agreement as the average Pearson correlation of a worker's Likert score with the average score of the remaining workers. This measure is less strict than exact label agreement and can account for close labels and high- or low-scoring workers. We observe a correlation of 0.81, indicating substantial agreement. For comparisons, the majority agreement is 0.73. To further examine the reliability of the collected data, we followed the approach of Kiritchenko and Mohammad (2016) and simply repeated the crowdsourcing for one of the three topics. Between the importance estimates calculated from the first and second runs, we found a Pearson correlation of 0.82 (Spearman 0.78) for Likert-scale tasks and 0.69 (Spearman 0.66) for comparison tasks. This shows that the approach, despite the subjectiveness of the task, allows us to collect reliable annotations.
In addition to the reliability studies, we extrinsically evaluated the annotations in the task of summary evaluation. For each of the 58 peer summaries in TAC2008, we calculated a score as the sum of the importance estimates of the propositions it contains. Table TABREF13 shows how these peer scores, averaged over the three topics, correlate with the manual responsiveness scores assigned during TAC in comparison to ROUGE-2 and Pyramid scores. The results demonstrate that with both task designs, we obtain importance annotations that are similarly useful for summary evaluation as pyramid annotations or gold-standard summaries (used for ROUGE).
Based on the pilot study, we conclude that the proposed crowdsourcing scheme allows us to obtain proper importance annotations for propositions. As workers are not required to read all documents, the annotation is much more efficient and scalable than with traditional methods.
Following the observations of Lloret et al. (2013), we established several measures for quality control. First, we restricted our tasks to workers from the US with an approval rate of at least 95%. Second, we identified low quality workers by measuring the correlation of each worker’s Likert-scores with the average of the other four scores. The worst workers (at most 5% of all labels) were removed. In addition, we included trap sentences, similar as in (Lloret et al., 2013), in around 80 of the tasks. In contrast to Lloret et al.’s findings, both an obvious trap sentence (This sentence is not important) and a less obvious but unimportant one (Barack Obama graduated from Harvard Law) were consistently labeled as unimportant (1.08 and 1.14), indicating that the workers did the task properly.
|
How do they control the quality of their pilot study?
|
They only employ US workers with an approval rate of at least 95%, and remove the worst workers based on the correlation of their Likert scores with the other workers' average. They also set trap sentences in around 80 of the tasks.
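The leave-one-out correlation filter can be sketched as follows; the Likert scores below are invented toy numbers, with five workers per item as in the study:

```python
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation (stdlib-only, no external dependencies)."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Each row: one worker's Likert scores over the same items (toy numbers).
scores = [
    [1, 5, 3, 4],
    [2, 5, 3, 5],
    [1, 4, 2, 4],
    [5, 1, 4, 1],  # this worker disagrees with the consensus
    [1, 5, 3, 4],
]

def worker_quality(i):
    """Correlate worker i's scores with the average of the other workers."""
    others = [s for j, s in enumerate(scores) if j != i]
    avg_others = [mean(col) for col in zip(*others)]
    return pearson(scores[i], avg_others)

print(worker_quality(3) < 0)  # True: worker 3 would be flagged and removed
```

Workers whose leave-one-out correlation falls lowest would then be dropped, up to the study's 5%-of-labels cap.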
|
null | false
| null |
What are five totally distinct, creative ways to describe the usefulness of empty time?
|
1. "Empty time is like an open window, allowing the possibilities of the future to unfold."
2. "Empty time is like a blank canvas, ready to be filled with new ideas and opportunities."
3. "Empty time is like a fresh start, allowing us to make the most of the present moment."
4. "Empty time is like a blank page, waiting to be written upon."
5. "Empty time is like a vacuum, opening up a space for new ideas to come in."
|
|
1804.10686
| true
| null |
We use two different unsupervised approaches for word sense disambiguation. The first, called `sparse model', uses a straightforward sparse vector space model, as widely used in Information Retrieval, to represent contexts and synsets. The second, called `dense model', represents synsets and contexts in a dense, low-dimensional space by averaging word embeddings.
In the synset embeddings model approach, we follow SenseGram BIBREF14 and apply it to the synsets induced from a graph of synonyms. We transform every synset into its dense vector representation by averaging the word embeddings corresponding to each constituent word: DISPLAYFORM0
We observe that the SenseGram-based approach for word sense disambiguation yields substantially better results in every case (Table TABREF25 ). The primary reason for that is the implicit handling of similar words due to the averaging of dense word vectors for semantically related words. Thus, we recommend using the dense approach in further studies. Although the AdaGram approach trained on a large text corpus showed better results according to the weighted average, this result does not transfer to languages with smaller available corpora.
We use two different unsupervised approaches for word sense disambiguation.
The second, called `dense model', represents synsets and contexts in a dense, low-dimensional space by averaging word embeddings.
In the synset embeddings model approach, we follow SenseGram BIBREF14 and apply it to the synsets induced from a graph of synonyms.
We observe that the SenseGram-based approach for word sense disambiguation yields substantially better results in every case (Table TABREF25 ). The primary reason for that is the implicit handling of similar words due to the averaging of dense word vectors for semantically related words.
|
Do the authors offer any hypothesis about why the dense mode outperformed the sparse one?
|
Yes.
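A minimal sketch of the dense ("SenseGram-style") synset representation described in the excerpt, with toy 3-dimensional vectors standing in for real word embeddings:

```python
from statistics import mean

# Toy word embeddings; real SenseGram-style models average high-dimensional
# vectors learned on large corpora.
word_vectors = {
    "car":  [1.0, 0.0, 0.0],
    "auto": [0.5, 0.5, 0.0],
    "bmw":  [0.5, 0.0, 0.5],
}

def synset_vector(words):
    """Average the embeddings of a synset's constituent words."""
    vecs = [word_vectors[w] for w in words if w in word_vectors]
    return [mean(col) for col in zip(*vecs)]

print(synset_vector(["car", "auto"]))  # [0.75, 0.25, 0.0]
```

Because semantically related words (e.g. "car" and "bmw") already have nearby embeddings, the averaged synset vector handles such similarity implicitly, which is the hypothesis the authors give for the dense model's advantage.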
|
1904.12535
| false
| null |
Prior to the SAOKE data set, an annotated data set for OIE tasks with 3,200 sentences in 2 domains was released in BIBREF20 to evaluate OIE algorithms; that data set was said to be “13 times larger than the previous largest annotated Open IE corpus” BIBREF20 . The SAOKE data set is 16 times larger than the data set in BIBREF20 . To the best of our knowledge, the SAOKE data set is the largest publicly available human-labeled data set for OIE tasks. Furthermore, the data set released in BIBREF20 was generated from a QA-SRL data set BIBREF21 , which indicates that it only contains facts that can be discovered by SRL (Semantic Role Labeling) algorithms and is thus biased toward an algorithm, whereas the SAOKE data set is not. Finally, the SAOKE data set contains sentences and facts from a large number of domains.
Prior to the SAOKE data set, an annotated data set for OIE tasks with 3,200 sentences in 2 domains was released in BIBREF20 to evaluate OIE algorithms; that data set was said to be “13 times larger than the previous largest annotated Open IE corpus” BIBREF20 .
|
What's the size of the previous largest OpenIE dataset?
|
The answers are shown as follows:
* 3,200 sentences
|
null | false
| null |
Why do people love the FIFA world cup?
|
The FIFA World Cup is loved by all because it is the one sporting event that rallies national communities around a common sport. Futbol (football), or soccer as it is known in the United States, has a way of uniting nations as they cheer for their favorite national team. It is arguably the one sport with an equal playing field: all one needs is a ball and players for anyone to play.
|
|
null | false
| 385
|
Neural sequence models BIBREF0, BIBREF1 typically generate outputs in an autoregressive left-to-right manner. These models have been successfully applied to a range of tasks, for example machine translation BIBREF2. They often rely on an encoder that processes the source sequence and a decoder that generates the output sequence conditioned on the output of the encoder. The decoder typically generates the target sequence one token at a time, in an autoregressive left-to-right fashion.
Recently, research in insertion-based non- or partially-autoregressive models has spiked BIBREF3, BIBREF4, BIBREF5, BIBREF6. These models are more flexible than their autoregressive counterparts. They can generate sequences in any order and can benefit from parallel token generation. They can learn complex orderings (e.g., tree orderings) and may be more applicable to tasks like cloze question answering BIBREF6 and text simplification, where the order of generation is not naturally left to right and the source sequence might not be fully observed. One recently proposed approach is the Insertion Transformer BIBREF3, where the target sequence is modelled with insertion-edits. As opposed to traditional sequence-to-sequence models, the Insertion Transformer can generate sequences in any arbitrary order, where left-to-right is a special case. Additionally, during inference, the model is endowed with parallel token generation capabilities. The Insertion Transformer can be trained to follow a soft balanced binary tree order, thus allowing the model to generate $n$ tokens in $O(\log _2 n)$ iterations.
In this work we generalize this insertion-based framework: we present a framework that emits both insertions and deletions. Our Insertion-Deletion Transformer consists of an insertion phase and a deletion phase that are executed iteratively. The insertion phase follows the typical insertion-based framework BIBREF3. In the deletion phase, however, we teach the model to perform deletions with on-policy training: we sample an input sequence on-policy from the insertion model (with on-policy insertion errors) and teach the deletion model the appropriate deletions.
This insertion-deletion framework allows for flexible sequence generation, parallel token generation and text editing. In a conventional insertion-based model, if the model makes a mistake during generation, this cannot be undone. Introducing the deletion phase makes it possible to undo the mistakes made by the insertion model, since it is trained on the on-policy errors of the insertion phase. The deletion model extension also enables the framework to efficiently handle tasks like text simplification and style transfer by starting the decoding process from the original source sequence.
A concurrent work was recently proposed, called the Levenshtein Transformer (LevT) BIBREF7. The LevT framework also generates sequences with insertion and deletion operations. Our approach has some important distinctions and can be seen as a simplified version, for both the architecture and the training algorithm. The training algorithm used in the LevT framework uses an expert policy. This expert policy requires dynamic programming to minimize Levenshtein distance between the current input and the target. This approach was also explored by BIBREF8, BIBREF9. Their learning algorithm arguably adds more complexity than needed over the simple on-policy method we propose. The LevT framework consists of three stages, first the number of tokens to be inserted is predicted, then the actual tokens are predicted, and finally the deletion actions are emitted. The extra classifier to predict the number of tokens needed to be inserted adds an additional Transformer pass to each generation step. In practice, it is also unclear whether the LevT exhibits speedups over an insertion-based model following a balanced binary tree order. In contrast, our Insertion-Deletion framework only has one insertion phase and one deletion phase, without the need to predict the number of tokens needed to be inserted. This greatly simplifies the model architecture, training procedure and inference runtime.
An alternative approach for text editing is proposed by BIBREF10, which they dub Deliberation Networks. This work also acknowledges the potential benefits from post-editing output sequences and proposes a two-phase decoding framework to facilitate this.
In this paper, we present the insertion-deletion framework as a proof of concept by applying it to two synthetic character-based translation tasks and showing it can significantly increase the BLEU score over the insertion-only framework.
In this paper, we present the insertion-deletion framework as a proof of concept by applying it to two synthetic character-based translation tasks and showing it can significantly increase the BLEU score over the insertion-only framework.
|
What do they present the insertion-deletion framework as?
|
A proof of concept.
|
null | false
| null |
What stroller should I buy for my newborn baby?
|
The stroller that's the best fit for you depends on what characteristics you value. Example characteristics can be: affordability, ease of pushing and steering, ease of folding, ability to grow with your family, aesthetics, weight, and availability of high-end features like a bassinet attachment or large storage basket.
If you care about affordability, ease of folding, and ease of pushing and steering, the Baby Jogger City Mini GT2 stroller is a great option. If you care about the ability of the stroller to grow with your family, ease of pushing and steering, and high-end features, the Uppababy Vista V2 stroller is another popular option. If you have twins, the Bugaboo Donkey 5 stroller could be a great fit with its side-by-side double configuration.
|
|
null | false
| null |
Singapore consists of 63 islands, including the main island, Pulau Ujong. There are two man-made connections to Johor, Malaysia: the Johor–Singapore Causeway in the north and the Tuas Second Link in the west. Jurong Island, Pulau Tekong, Pulau Ubin and Sentosa are the largest of Singapore's smaller islands. The highest natural point is Bukit Timah Hill at 163.63 m (537 ft). Under British rule, Christmas Island and the Cocos Islands were part of Singapore, and both were transferred to Australia in 1957. Pedra Branca is the nation's easternmost point.
Land reclamation projects have increased Singapore's land area from 580 km2 (220 sq mi) in the 1960s to 710 km2 (270 sq mi) by 2015, an increase of some 22% (130 km2). The country is projected to reclaim another 56 km2 (20 sq mi). Some projects involve merging smaller islands through land reclamation to form larger, more functional and habitable islands, as has been done with Jurong Island. The type of sand used in reclamation is found in rivers and beaches, rather than deserts, and is in great demand worldwide. In 2010 Singapore imported almost 15 million tons of sand for its projects, the demand being such that Indonesia, Malaysia, and Vietnam have all restricted or barred the export of sand to Singapore in recent years. As a result, in 2016 Singapore switched to using polders for reclamation, in which an area is enclosed and then pumped dry.
|
What are Singapore main geographical points according to this text? Bullet point please.
|
Singapore has today:
- A land area of 710 km2 (270 sq miles), and it is working towards reclaiming more using sand imports.
- 63 islands, including the main island (Pulau Ujong), the larger Jurong Island, Pulau Tekong, Pulau Ubin and Sentosa, and many smaller ones.
- Its highest natural point, Bukit Timah Hill, at 163.63 meters (537 feet).
|
null | false
| null |
Charles Awotwi Pratt was a Ghanaian theologian and priest. He served as the fourth chairman of the Methodist Church Ghana from 1977 to 1981. He was the father of the Titus Awotwi Pratt, the Presiding Bishop of the Methodist Church Ghana from 2014 to 2018.
|
Whos is Charles Awotwi Pratt?
|
Charles Awotwi Pratt was a Ghanaian theologian and priest. He served as the fourth chairman of the Methodist Church Ghana from 1977 to 1981.
|
1909.00542
| false
| null |
We conducted cross-validation experiments using various values of $t$ and $m$. Table TABREF26 shows the results for the best values of $t$ and $m$ obtained. The regressor and classifier used Support Vector Regression (SVR) and Support Vector Classification (SVC) respectively. To enable a fair comparison we used the same input features in all systems. These input features combine information from the question and the input sentence and are shown in Fig. FIGREF16. The features are based on BIBREF12, and are the same as in BIBREF1, plus the addition of the position of the input snippet. The best SVC and SVR parameters were determined by grid search.
Based on the findings of Section SECREF3, we apply minimal changes to the deep learning regression models of BIBREF2 to convert them to classification models. In particular, we add a sigmoid activation to the final layer, and use cross-entropy as the loss function. The complete architecture is shown in Fig. FIGREF28.
The regressor and classifier used Support Vector Regression (SVR) and Support Vector Classification (SVC) respectively.
Based on the findings of Section SECREF3, we apply minimal changes to the deep learning regression models of BIBREF2 to convert them to classification models.
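The grid search mentioned above can be sketched as a simple exhaustive loop. The scoring function below is a made-up stand-in for the 5-fold cross-validated F1 of the SVC/SVR models (its peak at t=0.5, m=10 is purely illustrative, not a value from the paper):

```python
# Minimal exhaustive grid search over hyperparameters t and m.
from itertools import product

def cross_val_score(t, m):
    # Hypothetical score surface peaking at t=0.5, m=10 (illustrative only).
    return 1.0 - abs(t - 0.5) - abs(m - 10) / 100.0

def grid_search(t_values, m_values):
    # Evaluate every (t, m) combination and keep the best-scoring one.
    return max(product(t_values, m_values),
               key=lambda tm: cross_val_score(*tm))

best_t, best_m = grid_search([0.1, 0.3, 0.5, 0.7], [5, 10, 20])
print(best_t, best_m)  # the (t, m) pair with the highest mock score
```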
|
What approaches without reinforcement learning have been tried?
|
The answers are shown as follows:
* Support Vector Regression (SVR) and Support Vector Classification (SVC)
* deep learning regression models of BIBREF2 to convert them to classification models
|
1603.00968
| false
| null |
We compared our proposed approaches to a standard CNN that exploits a single set of word embeddings BIBREF3 . We also compared to a baseline of simply concatenating embeddings for each word to form long vector inputs. We refer to this as Concatenation-CNN C-CNN. For all multiple embedding approaches (C-CNN, MG-CNN and MGNC-CNN), we explored two combined sets of embeddings, word2vec+Glove and word2vec+syntactic, and one combination of three embedding sets, word2vec+Glove+syntactic. For all models, we tuned the l2 norm constraint INLINEFORM0 over the range INLINEFORM1 on a validation set. For instantiations of MGNC-CNN in which we exploited two embeddings, we tuned both INLINEFORM2 and INLINEFORM3 ; where we used three embedding sets, we tuned INLINEFORM4 and INLINEFORM5 .
FLOAT SELECTED: Table 1: Results mean (min, max) achieved with each method. w2v:word2vec. Glv:GloVe. Syn: Syntactic embedding. Note that we experiment with using two and three sets of embeddings jointly, e.g., w2v+Syn+Glv indicates that we use all three of these.
We compared our proposed approaches to a standard CNN that exploits a single set of word embeddings BIBREF3 . We also compared to a baseline of simply concatenating embeddings for each word to form long vector inputs. We refer to this as Concatenation-CNN C-CNN. For all multiple embedding approaches (C-CNN, MG-CNN and MGNC-CNN), we explored two combined sets of embeddings, word2vec+Glove and word2vec+syntactic, and one combination of three embedding sets, word2vec+Glove+syntactic. For all models, we tuned the l2 norm constraint INLINEFORM0 over the range INLINEFORM1 on a validation set. For instantiations of MGNC-CNN in which we exploited two embeddings, we tuned both INLINEFORM2 and INLINEFORM3 ; where we used three embedding sets, we tuned INLINEFORM4 and INLINEFORM5 .
FLOAT SELECTED: Table 1: Results mean (min, max) achieved with each method. w2v:word2vec. Glv:GloVe. Syn: Syntactic embedding. Note that we experiment with using two and three sets of embeddings jointly, e.g., w2v+Syn+Glv indicates that we use all three of these.
|
What are the baseline models?
|
MC-CNN
MVCNN
CNN
|
null | false
| null |
Seinfeld (/ˈsaɪnfɛld/ SYNE-feld) is an American television sitcom created by Larry David and Jerry Seinfeld. It aired on NBC from July 5, 1989, to May 14, 1998, over nine seasons and 180 episodes. It stars Seinfeld as a fictionalized version of himself and focuses on his personal life with three of his friends: best friend George Costanza (Jason Alexander), former girlfriend Elaine Benes (Julia Louis-Dreyfus) and his neighbor from across the hall, Cosmo Kramer (Michael Richards). It is set mostly in an apartment building in Manhattan's Upper West Side in New York City. It has been described as "a show about nothing", often focusing on the minutiae of daily life. Interspersed in earlier episodes are moments of stand-up comedy from the fictional Jerry Seinfeld, frequently using the episode's events for material.
|
Who is the main cast in Seinfeld TV show?
|
Jerry Seinfeld
Jason Alexander
Julia Louis-Dreyfus
Michael Richards
|
null | false
| 2
|
In order to incorporate the annotated NE labels to predict the exact worker, we build another bi-directional LSTM (named the label Bi-LSTM) based on the crowd-annotated NE label sequence. This Bi-LSTM is used for the worker discriminator only. During decoding in the testing phase, this Bi-LSTM is not needed, because the worker discriminator is no longer required.
Assuming the crowd-annotated NE label sequence annotated by one worker is $\mathbf {\bar{y}} = \bar{y}_1\bar{y}_2 \cdots \bar{y}_n$ , we exploit a looking-up table $\mathbf {E}^{L}$ to obtain the corresponding sequence of their vector representations $\mathbf {x^{\prime }}_1\mathbf {x^{\prime }}_2\cdots \mathbf {x^{\prime }}_n$ , similar to the method that maps characters into their neural representations. Concretely, for one NE label $\bar{y}_t$ ( $t \in [1, n]$ ), we obtain its neural vector by: $\mathbf {x^{\prime }}_t = \text{look-up}(\bar{y}_t, \mathbf {E}^L)$ .
Next step we apply bi-directional LSTM over the sequence $\mathbf {x^{\prime }}_1\mathbf {x^{\prime }}_2\cdots \mathbf {x^{\prime }}_n$ , which can be formalized as:
$$\mathbf {h}_1^{\text{label}} \mathbf {h}_2^{\text{label}} \cdots \mathbf {h}_n^{\text{label}} = \text{Bi-LSTM}(\mathbf {x^{\prime }}_1\mathbf {x^{\prime }}_2\cdots \mathbf {x^{\prime }}_n).$$ (Eq. 16)
The resulting feature sequence is concatenated with the outputs of the common Bi-LSTM and further used for worker classification.
In order to incorporate the annotated NE labels to predict the exact worker, we build another bi-directional LSTM (named the label Bi-LSTM) based on the crowd-annotated NE label sequence.
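A minimal numpy sketch of the idea: each crowd label y_t is mapped to a vector through the look-up table E^L, a bidirectional recurrent pass is run over the sequence, and the per-timestep forward and backward states are concatenated. A plain tanh RNN stands in for the LSTM to keep the sketch short, and all weights are random placeholders rather than trained parameters.

```python
# Sketch of the label Bi-LSTM: look-up table + bidirectional recurrent pass.
import numpy as np

rng = np.random.default_rng(0)
labels = ["B-PER", "I-PER", "O"]                   # toy NE tag set
E_L = {lab: rng.normal(size=4) for lab in labels}  # look-up table E^L

def rnn_pass(xs, W, U, reverse=False):
    h = np.zeros(3)
    hs = []
    seq = reversed(xs) if reverse else xs
    for x in seq:
        h = np.tanh(W @ x + U @ h)                 # simplified recurrent step
        hs.append(h)
    return hs[::-1] if reverse else hs             # realign to time order

def label_birnn(label_seq):
    xs = [E_L[y] for y in label_seq]               # x'_t = look-up(y_t, E^L)
    Wf, Uf = rng.normal(size=(3, 4)), rng.normal(size=(3, 3))
    Wb, Ub = rng.normal(size=(3, 4)), rng.normal(size=(3, 3))
    fwd = rnn_pass(xs, Wf, Uf)
    bwd = rnn_pass(xs, Wb, Ub, reverse=True)
    # h_t^label = [fwd_t ; bwd_t]; later concatenated with the common Bi-LSTM
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

feats = label_birnn(["B-PER", "I-PER", "O"])
print(len(feats), feats[0].shape)
```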
|
What do the authors build in order to incorporate the annotated NE labels to predict the exact worker?
|
The authors build another bi-directional LSTM (named the label Bi-LSTM) based on the crowd-annotated NE label sequence.
|
1709.05563
| false
| null |
The UCL team had access to micro-narratives, as well as context specific meta-data such as demographic information and project details. For a cross-national comparison for policy-makers, the team translated the responses in multiple languages into English using machine translation, in this case Translate API (Yandex Technologies). As a pre-processing step, words without functional meaning (e.g. `I'), rare words that occurred in only one narrative, numbers, and punctuation were all removed. The remaining words were stemmed to remove plural forms of nouns or conjugations of verbs.
The UCL team had access to micro-narratives, as well as context specific meta-data such as demographic information and project details. For a cross-national comparison for policy-makers, the team translated the responses in multiple languages into English using machine translation, in this case Translate API (Yandex Technologies). As a pre-processing step, words without functional meaning (e.g. `I'), rare words that occurred in only one narrative, numbers, and punctuation were all removed. The remaining words were stemmed to remove plural forms of nouns or conjugations of verbs.
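These preprocessing steps can be sketched with the standard library alone. The stopword list and the suffix-stripping "stemmer" below are crude hypothetical stand-ins (a real pipeline would use a proper stopword list and a stemmer such as Porter's):

```python
# Sketch of the preprocessing: strip punctuation/numbers, drop function
# words, drop words occurring in only one narrative, then crudely stem.
import re
from collections import Counter

STOPWORDS = {"i", "the", "a", "to", "of"}          # illustrative list only

def crude_stem(word):
    # Hypothetical stand-in for a real stemmer (e.g. Porter).
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(narratives):
    token_lists = [re.findall(r"[a-z]+", n.lower()) for n in narratives]
    # Document frequency: how many narratives each word appears in.
    doc_freq = Counter(w for toks in token_lists for w in set(toks))
    cleaned = []
    for toks in token_lists:
        kept = [crude_stem(w) for w in toks
                if w not in STOPWORDS and doc_freq[w] > 1]
        cleaned.append(kept)
    return cleaned

docs = ["I visited the clinic in 2019", "She visited a clinic daily"]
print(preprocess(docs))
```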
|
What elements of natural language processing are proposed to analyze qualitative data?
|
The answers are shown as follows:
* translated the responses in multiple languages into English using machine translation
* words without functional meaning (e.g. `I'), rare words that occurred in only one narrative, numbers, and punctuation were all removed
* remaining words were stemmed to remove plural forms of nouns or conjugations of verbs
|
null | false
| null |
Which of these words are homonyms (words that have two meanings): cleave, cotton, red, hungry
|
Cleave and cotton are homonyms: cleave can mean both to split apart and to cling, and cotton is both a fabric and a verb meaning to take a liking to. Red and hungry are not homonyms.
|
|
null | false
| null |
What would make my tzatziki sauce truly exceptional?
|
Tzatziki is a traditional Greek sauce whose base ingredients comprise salted strained yoghurt, garlic, grated cucumber, dill leaves, and olive oil.
To make your tzatziki truly exceptional follow the steps below:
1) Start by squeezing as much water as possible out of half a grated cucumber. This is the single most important step for making your tzatziki sauce exceptional.
2) Fine-chop the dill leaves using a knife. The dill must be fresh. The volume of the chopped dill leaves should be about the same as the volume of the squeezed grated cucumber.
3) Mix the cucumber and the dill with Greek strained yoghurt. The latter should have a volume that is three to four times that of the cucumber and dill.
4) Add some salt and smashed cloves of fresh garlic.
5) Add a couple of teaspoons of balsamic vinegar.
6) While stirring the mix, gradually keep adding extra virgin olive oil until the sauce obtains a silky texture.
7) Do not add any other ingredients in the mix.
8) Place the mix in the fridge and keep it for a couple of hours and let chemistry do its magic.
9) Remove the residual water, if any, which may have surfaced.
10) Stir the mix once more final time and serve in smaller containers.
|
|
null | false
| null |
What is software craftsmanship
|
Software craftsmanship is the art of writing good code that is easy to understand and maintain and is of high quality. This practice includes many things, such as writing modular code, unit and integration testing, commenting and properly formatting code, and continuous integration & continuous deployment. All these best practices make the software less bug-prone, higher in quality, and easier to deploy, learn, and maintain over time.
|
|
null | false
| null |
Classify these various instruments as either percussion or woodwind instruments: flute, clarinet, drum, bassoon, marimba, cymbal
|
The drum, marimba and cymbal are percussion instruments. The flute, clarinet, and bassoon are woodwind instruments.
|
|
null | false
| 35
|
The model used for translation is the one implemented by Bahdanau et al. (2014). A bidirectional LSTM encoder first takes the source sentence and encodes it into a context vector which acts as input for the decoder. The decoder is attention-based: at each step its hidden state receives the weighted sum of all the hidden layer outputs of the encoder, along with the output of the previous hidden layer and the previously decoded word. This provides a contextual reference into the source language sentence BIBREF4 .
Neural Machine Translation models directly compute the probability of the target language sentence given the source language sentence, word by word for every time step. The model with a basic decoder without the attention module computes the log probability of target sentence given source sentence as the sum of log probabilities of every word given every word before that. The attention-based model, on the other hand, calculates: DISPLAYFORM0
where INLINEFORM0 is the number of words in the target sentence, INLINEFORM1 is the target sentence, INLINEFORM2 is the source sentence, INLINEFORM3 is the fixed length output vector of the encoder and INLINEFORM4 is the weighted sum of all the hidden layer outputs of the encoder at every time step. Both the encoder's output context vector and the weighted sum (known as attention vector) help to improve the quality of translation by enabling selective source sentence lookup.
The decoder LSTM computes: DISPLAYFORM0
where the probability is computed as a function of the decoder's output in the previous time step INLINEFORM0 , the hidden layer vector of the decoder in the current timestep INLINEFORM1 and the context vector from the attention mechanism INLINEFORM2 . The context vector INLINEFORM3 for time step INLINEFORM4 is computed as a weighted sum of the output of the entire sentence using a weight parameter INLINEFORM5 : DISPLAYFORM0
where INLINEFORM0 is the number of tokens in the source sentence, INLINEFORM1 refers to the value of the hidden layer of the encoder at time step INLINEFORM2 , and INLINEFORM3 is the alignment parameter. This parameter is calculated by means of a feed forward neural network to ensure that the alignment model is free from the difficulties of contextualization of long sentences into a single vector. The feed forward network is trained along with the neural translation model to jointly improve the performance of the translation. Mathematically, DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 is the softmax output of the result of the feedforward network, INLINEFORM1 is the hidden state value of the decoder at timestep INLINEFORM2 and INLINEFORM3 is the encoder's hidden layer annotation at timestep INLINEFORM4 . A concatenation of the forward and the reverse hidden layer parameters of the encoder is used at each step to compute the weights INLINEFORM5 for the attention mechanism. This is done to enable an overall context of the sentence, as opposed to a context of only all the previous words of the sentence for every word in consideration. Fig. FIGREF12 is the general architecture of the neural translation model without the Bidirectional LSTM encoder.
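The alignment-and-context computation above can be sketched in numpy: alignment scores come from a small feed-forward network over the decoder state and each encoder annotation, a softmax turns them into weights, and the context vector is the weighted sum of encoder outputs. All weights here are random placeholders, not trained parameters.

```python
# Sketch of one global-attention step: scores -> softmax -> weighted sum.
import numpy as np

rng = np.random.default_rng(1)
T_src, d = 5, 8                       # source length, hidden size
enc = rng.normal(size=(T_src, d))     # encoder annotations h_s
s_prev = rng.normal(size=d)           # decoder hidden state

W_a, U_a = rng.normal(size=(d, d)), rng.normal(size=(d, d))
v_a = rng.normal(size=d)

# Feed-forward alignment score for each source position.
scores = np.array([v_a @ np.tanh(W_a @ s_prev + U_a @ h) for h in enc])
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                  # softmax alignment weights
context = alpha @ enc                 # c_t = sum_s alpha_s * h_s

print(round(float(alpha.sum()), 6), context.shape)
```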
A global attention mechanism is preferred over local attention because the differences in the structures of the languages cannot be mapped efficiently to enable lookup into the right parts of the source sentence. Using a local attention mechanism with a monotonic context lookup, where the region around the INLINEFORM0 source word is looked up for the prediction of the INLINEFORM1 target word, is impractical because of the structural discordance between the English and Tamil sentences (see Figs. FIGREF37 and FIGREF44 ). The use of Gaussian and other such distributions to facilitate local attention would also be inefficient because of the existence of various forms of translation for the same source sentence, involving morphological and structural variations that do not stay uniform through the entire corpus BIBREF5 .
The No Peepholes (NP) variant of the LSTM cell, formulated in Greff et al. (2015), is used in this experiment as it proved to give the best results amongst all the variants of an LSTM cell. It is specified by means of a gated mechanism designed to ensure that the vanishing gradient problem is prevented. LSTM maintains its hidden layer in two components, the cell vector INLINEFORM0 and the actual hidden layer output vector INLINEFORM1 . The cell vector is ensured to never reach zero by means of a weighted sum of the previous layer's cell vector INLINEFORM2 regulated by the forget gate INLINEFORM3 and an activation of the weighted sum of the input INLINEFORM4 in the current timestep INLINEFORM5 and the previous timestep's hidden layer output vector INLINEFORM6 . The combination is similarly regulated by the input gate INLINEFORM7 . The hidden layer output is determined as an activation of the cell gate, regulated by the output gate INLINEFORM8 . The interplay between these two vectors ( INLINEFORM9 and INLINEFORM10 ) at every timestep ensures that the problem of vanishing gradients doesn't occur. The three gates are also formed as a sigmoid of the weighted sum of the previous hidden layer output INLINEFORM11 and the input in the current timestep INLINEFORM12 . The output generated out of the LSTM's hidden layer is specified as a weighted softmax over the hidden layer output INLINEFORM13 . The learnable parameters of an LSTM cell are all the weights INLINEFORM14 and the biases INLINEFORM15 . DISPLAYFORM0
The LSTM specified by equations 7 through 11 is the one used for the decoder of the model. The encoder uses a bidirectional RNN LSTM cell in which there are two hidden layer components INLINEFORM0 and INLINEFORM1 that contribute to the output INLINEFORM2 of each time step INLINEFORM3 . Both the components have their own sets of LSTM equations in such a way that INLINEFORM4 for every timestep is computed from the first timestep till the INLINEFORM5 token is reached and INLINEFORM6 is computed from the INLINEFORM7 timestep backwards until the first token is reached. All the five vectors of the two components are all exactly the same as the LSTM equations specified with one variation in the computation of the result. DISPLAYFORM0
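One step of the gated cell described by these equations can be sketched in numpy as follows. The weights are random placeholders; the point is the gating structure (input, forget, and output gates regulating the cell and hidden vectors), not trained behavior.

```python
# Sketch of one step of a "No Peepholes" LSTM cell.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, params):
    Wi, Ui, bi, Wf, Uf, bf, Wo, Uo, bo, Wc, Uc, bc = params
    i = sigmoid(Wi @ x + Ui @ h_prev + bi)         # input gate
    f = sigmoid(Wf @ x + Uf @ h_prev + bf)         # forget gate
    o = sigmoid(Wo @ x + Uo @ h_prev + bo)         # output gate
    c = f * c_prev + i * np.tanh(Wc @ x + Uc @ h_prev + bc)  # cell vector
    h = o * np.tanh(c)                             # hidden output
    return h, c

rng = np.random.default_rng(2)
d_in, d_h = 4, 3
# Random placeholder parameters: (W, U, b) for each of the four gates/cell.
params = tuple(rng.normal(size=(d_h, d_in)) if k % 3 == 0
               else rng.normal(size=(d_h, d_h)) if k % 3 == 1
               else rng.normal(size=d_h)
               for k in range(12))
h, c = lstm_step(rng.normal(size=d_in), np.zeros(d_h), np.zeros(d_h), params)
print(h.shape, c.shape)
```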
The model used for translation is the one implemented by Bahdanau et al. (2014).
|
Which model is chosen for translation?
|
The model used for translation is the one implemented by Bahdanau et al. (2014).
|
null | false
| null |
What are the only two countries in South America that do not touch Brazil?
|
Chile and Ecuador.
|
|
null | false
| null |
What do you love most about spring?
|
I love the weather in spring and how the sun shines brighter. The warmth feels so good after you’ve had a cold winter. I love to see the flowers blooming and to feel the crisp morning air. Spring is a special time and makes you know that summer is near.
|
|
null | false
| 52
|
Feature ablation studies are conducted to assess the informativeness of a feature group by quantifying the change in predictive power when comparing the performance of a classifier trained with all feature groups versus the performance without a particular feature group. We conducted a feature ablation study by holding out (sans) each feature group and training and testing the support vector model using a linear kernel and 5-fold, stratified cross-validation. We report the average F1-score from our baseline approach (all feature groups) and report the point difference (+ or -) in F1-score performance observed by ablating each feature set.
By ablating each feature group from the full dataset, we observed the following count of features - sans lexical: 185, sans syntactic: 16,935, sans emotion: 16,954, sans demographics: 16,946, sans sentiment: 16,950, sans personality: 16,946, and sans LIWC: 16,832. In Figure 1, compared to the baseline performance, significant drops in F1-scores resulted from sans lexical for depressed mood (-35 points), disturbed sleep (-43 points), and depressive symptoms (-45 points). Less extensive drops also occurred for evidence of depression (-14 points) and fatigue or loss of energy (-3 points). In contrast, a 3 point gain in F1-score was observed for no evidence of depression. We also observed notable drops in F1-scores for disturbed sleep by ablating demographics (-7 points), emotion (-5 points), and sentiment (-5 points) features. These F1-score drops were accompanied by drops in both recall and precision. We found equal or higher F1-scores by removing non-lexical feature groups for no evidence of depression (0-1 points), evidence of depression (0-1 points), and depressive symptoms (2 points).
Unsurprisingly, lexical features (unigrams) were the largest contributor to feature counts in the dataset. We observed that lexical features are also critical for identifying depressive symptoms, specifically for depressed mood and for disturbed sleep. For the classes higher in the hierarchy - no evidence of depression, evidence of depression, and depressive symptoms - the classifier produced consistent F1-scores, even slightly above the baseline for depressive symptoms and minor fluctuations of change in recall and precision when removing other feature groups suggesting that the contribution of non-lexical features to classification performance was limited. However, notable changes in F1-score were observed for the classes lower in the hierarchy including disturbed sleep and fatigue or loss of energy. For instance, changes in F1-scores driven by both recall and precision were observed for disturbed sleep by ablating demographics, emotion, and sentiment features, suggesting that age or gender (“mid-semester exams have me restless”), polarity and subjective terms (“lack of sleep is killing me”), and emoticons (“wide awake :(”) could be important for both identifying and correctly classifying a subset of these tweets.
Feature ablation studies are conducted to assess the informativeness of a feature group by quantifying the change in predictive power when comparing the performance of a classifier trained with all feature groups versus the performance without a particular feature group.
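The ablation protocol amounts to a simple loop: evaluate with all feature groups, then re-evaluate with each group held out and report the F1 delta. The evaluate() function below is a mock stand-in for the 5-fold cross-validated SVM, with an invented score surface in which lexical features dominate (illustrative only, not the paper's numbers):

```python
# Sketch of the feature ablation loop: baseline F1 vs. sans-group F1.
GROUPS = ["lexical", "syntactic", "emotion", "demographics",
          "sentiment", "personality", "LIWC"]

def evaluate(feature_groups):
    # Hypothetical scorer: lexical features matter most (illustrative only).
    score = 0.30
    if "lexical" in feature_groups:
        score += 0.40
    score += 0.02 * sum(g != "lexical" for g in feature_groups)
    return round(score, 2)

baseline = evaluate(GROUPS)
# Point difference in F1 observed when each group is ablated.
deltas = {g: round(evaluate([x for x in GROUPS if x != g]) - baseline, 2)
          for g in GROUPS}
print(baseline, deltas["lexical"])
```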
|
How does feature ablation work?
|
Feature ablation studies are conducted to assess the informativeness of a feature group by quantifying the change in predictive power when comparing the performance of a classifier trained with all feature groups versus the performance without a particular feature group.
|
null | false
| null |
What is the difference between their, there, and they're?
|
Their, there, and they're are homophones. Homophones are words that sound similar but have different meanings and spellings. Their is a possessive pronoun, as in "their bicycle is green". They're is a contraction of "they are", as in "they're going to the game tomorrow". There is most commonly an adverb of place, as in "she is always there for her friends" or "stop over there"; it can also be used as a noun, as in "stay out of there".
|
|
null | false
| 58
|
The race and gender bias evaluation was carried out on the output of the 219 automatic systems that participated in SemEval-2018 Task 1: Affect in Tweets BIBREF5 . The shared task included five subtasks on inferring the affectual state of a person from their tweet: 1. emotion intensity regression, 2. emotion intensity ordinal classification, 3. valence (sentiment) regression, 4. valence ordinal classification, and 5. emotion classification. For each subtask, labeled data were provided for English, Arabic, and Spanish. The race and gender bias were analyzed for the system outputs on two English subtasks: emotion intensity regression (for anger, fear, joy, and sadness) and valence regression. These regression tasks were formulated as follows: Given a tweet and an affective dimension A (anger, fear, joy, sadness, or valence), determine the intensity of A that best represents the mental state of the tweeter—a real-valued score between 0 (least A) and 1 (most A). Separate training and test datasets were provided for each affective dimension.
Training sets included tweets along with gold intensity scores. Two test sets were provided for each task: 1. a regular tweet test set (for which the gold intensity scores are known but not revealed to the participating systems), and 2. the Equity Evaluation Corpus (for which no gold intensity labels exist). Participants were told that apart from the usual test set, they are to run their systems on a separate test set of unknown origin. The participants were instructed to train their system on the tweets training sets provided, and that they could use any other resources they may find or create. They were to run the same final system on the two test sets. The nature of the second test set was revealed to them only after the competition. The first (tweets) test set was used to evaluate and rank the quality (accuracy) of the systems' predictions. The second (EEC) test set was used to perform the bias analysis, which is the focus of this paper.
Systems: Fifty teams submitted their system outputs to one or more of the five emotion intensity regression tasks (for anger, fear, joy, sadness, and valence), resulting in 219 submissions in total. Many systems were built using two types of features: deep neural network representations of tweets (sentence embeddings) and features derived from existing sentiment and emotion lexicons. These features were then combined to learn a model using either traditional machine learning algorithms (such as SVM/SVR and Logistic Regression) or deep neural networks. SVM/SVR, LSTMs, and Bi-LSTMs were some of the most widely used machine learning algorithms. The sentence embeddings were obtained by training a neural network on the provided training data, a distant supervision corpus (e.g., AIT2018 Distant Supervision Corpus that has tweets with emotion-related query terms), sentiment-labeled tweet corpora (e.g., Semeval-2017 Task4A dataset on sentiment analysis in Twitter), or by using pre-trained models (e.g., DeepMoji BIBREF20 , Skip thoughts BIBREF21 ). The lexicon features were often derived from the NRC emotion and sentiment lexicons BIBREF22 , BIBREF23 , BIBREF24 , AFINN BIBREF25 , and Bing Liu Lexicon BIBREF26 .
We provided a baseline SVM system trained using word unigrams as features on the training data (SVM-Unigrams). This system is also included in the current analysis.
Measuring bias: To examine gender bias, we compared each system's predicted scores on the EEC sentence pairs as follows:
Thus, eleven pairs of scores (ten pairs of scores from ten noun phrase pairs and one pair of scores from the averages on name subsets) were examined for each template–emotion word instantiation. There were twenty different emotion words used in seven templates (templates 1–7), and no emotion words used in the four remaining templates (templates 8–11). In total, INLINEFORM0 pairs of scores were compared.
Similarly, to examine race bias, we compared pairs of system predicted scores as follows:
Thus, one pair of scores was examined for each template–emotion word instantiation. In total, INLINEFORM0 pairs of scores were compared.
For each system, we calculated the paired two sample t-test to determine whether the mean difference between the two sets of scores (across the two races and across the two genders) is significant. We set the significance level to 0.05. However, since we performed 438 assessments (219 submissions evaluated for biases in both gender and race), we applied Bonferroni correction. The null hypothesis that the true mean difference between the paired samples was zero was rejected if the calculated p-value fell below INLINEFORM0 .
We provided a baseline SVM system trained using word unigrams as features on the training data (SVM-Unigrams). This system is also included in the current analysis.
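The statistical test can be sketched with the standard library: the paired-difference t statistic over matched score pairs, and the Bonferroni-corrected significance level alpha / 438. The score lists below are invented; converting t into a p-value requires a t-distribution CDF (e.g. scipy.stats), which is omitted here.

```python
# Sketch of the paired t-test statistic and Bonferroni correction.
from statistics import mean, stdev
from math import sqrt

def paired_t_statistic(scores_a, scores_b):
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

alpha, n_tests = 0.05, 438          # 219 submissions x 2 bias analyses
threshold = alpha / n_tests         # Bonferroni-corrected level

# Invented paired system scores on matched female/male EEC sentences.
female = [0.61, 0.58, 0.64, 0.60, 0.63]
male   = [0.57, 0.55, 0.60, 0.58, 0.59]
t = paired_t_statistic(female, male)
print(round(t, 2), round(threshold, 6))
```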
|
What baseline does the author provide?
|
They provided a baseline SVM system trained using word unigrams as features on the training data (SVM-Unigrams). This system is also included in the current analysis.
|
null | false
| null |
What are some items I should stock up on from the grocery store to have in my pantry
|
canned beans, rice, shelf-stable plant-based milk, crackers, salsa, pasta, pasta sauce, oatmeal, pretzels, maple syrup, noodles, canned tomatoes, tortilla chips, tea, almond butter, walnuts, balsamic vinegar, tahini, dark chocolate, grits, dried cranberries, pistachios
|
|
null | false
| null |
How can technology be used to address challenges in the healthcare industry?
|
Current challenges in the healthcare industry can be summarized as a supply and demand mismatch. The current demand for healthcare services outpaces the available supply of care delivery capacity. Demand continues to grow due to aging populations and increasing prevalence of expensive chronic diseases. Supply is constrained due to labor shortages and limited care infrastructure. Many see technology as a force multiplier that will allow health systems to achieve more with the same or even fewer resources. Technology can allow for more efficient allocation of resources, more scalable models of care, and better data and analytics.
|
|
null | false
| null |
The movement of electric charge is known as an electric current, the intensity of which is usually measured in amperes. Current can consist of any moving charged particles; most commonly these are electrons, but any charge in motion constitutes a current. Electric current can flow through some things, electrical conductors, but will not flow through an electrical insulator.
By historical convention, a positive current is defined as having the same direction of flow as any positive charge it contains, or to flow from the most positive part of a circuit to the most negative part. Current defined in this manner is called conventional current. The motion of negatively charged electrons around an electric circuit, one of the most familiar forms of current, is thus deemed positive in the opposite direction to that of the electrons. However, depending on the conditions, an electric current can consist of a flow of charged particles in either direction, or even in both directions at once. The positive-to-negative convention is widely used to simplify this situation.
The process by which electric current passes through a material is termed electrical conduction, and its nature varies with that of the charged particles and the material through which they are travelling. Examples of electric currents include metallic conduction, where electrons flow through a conductor such as metal, and electrolysis, where ions (charged atoms) flow through liquids, or through plasmas such as electrical sparks. While the particles themselves can move quite slowly, sometimes with an average drift velocity only fractions of a millimetre per second, the electric field that drives them itself propagates at close to the speed of light, enabling electrical signals to pass rapidly along wires.
In engineering or household applications, current is often described as being either direct current (DC) or alternating current (AC). These terms refer to how the current varies in time. Direct current, as produced for example by a battery and required by most electronic devices, is a unidirectional flow from the positive part of a circuit to the negative. If, as is most common, this flow is carried by electrons, they will be travelling in the opposite direction. Alternating current is any current that reverses direction repeatedly; almost always this takes the form of a sine wave. Alternating current thus pulses back and forth within a conductor without the charge moving any net distance over time. The time-averaged value of an alternating current is zero, but it delivers energy in first one direction, and then the reverse. Alternating current is affected by electrical properties that are not observed under steady state direct current, such as inductance and capacitance. These properties however can become important when circuitry is subjected to transients, such as when first energised.
|
Extract examples of electrical conduction from the following passage.
|
Examples of electric currents include metallic conduction, where electrons flow through a conductor such as metal, and electrolysis, where ions (charged atoms) flow through liquids, or through plasmas such as electrical sparks. While the particles themselves can move quite slowly, sometimes with an average drift velocity only fractions of a millimetre per second, the electric field that drives them itself propagates at close to the speed of light, enabling electrical signals to pass rapidly along wires.
|
null | false
| null |
Name three poems by William Wordsworth.
|
Tintern Abbey, I Wandered Lonely as a Cloud, and the Solitary Reaper
|
|
null | false
| null |
Sheep are most likely descended from the wild mouflon of Europe and Asia, with Iran being a geographic envelope of the domestication center. One of the earliest animals to be domesticated for agricultural purposes, sheep are raised for fleeces, meat (lamb, hogget or mutton) and milk. A sheep's wool is the most widely used animal fiber, and is usually harvested by shearing. In Commonwealth countries, ovine meat is called lamb when from younger animals and mutton when from older ones; in the United States, meat from both older and younger animals is usually called lamb. Sheep continue to be important for wool and meat today, and are also occasionally raised for pelts, as dairy animals, or as model organisms for science.
|
List the agricultural reasons why sheep were domesticated
|
Here is a list of the agricultural reasons why sheep were domesticated:
* Fleeces - A sheep's wool is the most widely used animal fiber and is usually harvested by shearing
* Meat - Sheep are domesticated for their meat in Commonwealth countries (called lamb when from younger animals and mutton when from older ones) and in the United States (meat from both older and younger animals is usually called lamb)
* Milk - Sheep are occasionally raised as dairy animals
|
null | false
| null |
At the collegiate level in the United States, the lightweight weight requirements can be different depending on competitive season. For fall regattas (typically head races), the lightweight cutoff for men is 165.0 lb. and 135.0 lb. for women. In the spring season (typically sprint races), the lightweight cutoff for men is 160.0 lb., with a boat average of 155.0 lb. for the crew; for women, the lightweight cutoff is 130.0 lb.
|
What are the lightweight weight requirements at the collegiate level in the United States?
|
In the United States, the lightweight weight requirements differ based on the season. In the spring season, the lightweight cutoff for men is 160 pounds with a boat average of 155 pounds. The lightweight cutoff for women is 130 pounds.
|
1907.08501
| true
| null |
For DQA four participants answered each question, therefore we took the average INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 values over the four evaluators as the result per question. The detailed answers by the participants are available online.
To assess the correctness of the answers given both by participants in the DQA experiments, and by the QALD system, we use the classic information retrieval metrics of precision (P), recall (R), and F1. INLINEFORM0 measures the fraction of relevant (correct) answer (items) given versus all answers (answer items) given. INLINEFORM1 is the fraction of correct answer (parts) given divided by all correct ones in the gold answer, and INLINEFORM2 is the harmonic mean of INLINEFORM3 and INLINEFORM4 . As an example, if the question is “Where was Albert Einstein born?” (gold answer: “Ulm”), and the system gives two answers “Ulm” and “Bern”, then INLINEFORM5 , INLINEFORM6 and INLINEFORM7 .
For DQA four participants answered each question, therefore we took the average INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 values over the four evaluators as the result per question. The detailed answers by the participants are available online.
To assess the correctness of the answers given both by participants in the DQA experiments, and by the QALD system, we use the classic information retrieval metrics of precision (P), recall (R), and F1. INLINEFORM0 measures the fraction of relevant (correct) answer (items) given versus all answers (answer items) given.
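The metric computation described above can be sketched in a few lines, treating an answer as a set of items (the function name is illustrative, not from the paper):

```python
def prf1(predicted, gold):
    """Precision, recall, and F1 over answer items, treated as sets."""
    predicted, gold = set(predicted), set(gold)
    correct = len(predicted & gold)
    p = correct / len(predicted) if predicted else 0.0
    r = correct / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f1

# The example from the text: gold answer "Ulm", system answers "Ulm" and "Bern".
p, r, f1 = prf1(["Ulm", "Bern"], ["Ulm"])
print(p, r, round(f1, 3))  # 0.5 1.0 0.667
```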
|
Do they test performance of their approaches using human judgements?
|
Yes.
|
null | false
| 147
|
Table TABREF14 shows the effectiveness of our model (multi-task transformer) over the baseline transformer BIBREF8 . Our model achieves significant performance gains in the test sets over the baseline for both Italian and Finnish query translation. The overall low MAP for NMT can possibly be improved with a larger TC. Moreover, our model validation approach requires access to the RC index, and it slows down the overall training process. Hence, we could not train our model for a large number of epochs, which may be another cause of the low performance.
We want to show that translation terms generated by our multi-task transformer are roughly equally likely to be seen in the Europarl corpus (TC) or the CLEF corpus (RC). Given a translation term INLINEFORM0 , we compute the ratio of the probability of seeing INLINEFORM1 in TC and RC, INLINEFORM2 . Here, INLINEFORM3 and INLINEFORM4 are calculated similarly. Given a query INLINEFORM5 and its translation INLINEFORM6 provided by model INLINEFORM7 , we calculate the balance of INLINEFORM8 , INLINEFORM9 . If INLINEFORM10 is close to 1, the translation terms are as likely in TC as in RC. Figure FIGREF15 shows the balance values for the transformer and our model for a random sample of 20 queries from the validation set of Italian queries, respectively. Figure FIGREF16 shows the balance values for the transformer and our model for a random sample of 20 queries from the test set of Italian queries, respectively. It is evident that our model achieves better balance compared to the baseline transformer, except for a very few cases.
Given a query INLINEFORM0 , consider INLINEFORM1 as the set of terms from human translation of INLINEFORM2 and INLINEFORM3 as the set of translation terms generated by model INLINEFORM4 . We define INLINEFORM5 and INLINEFORM6 as precision and recall of INLINEFORM7 for model INLINEFORM8 . In Table TABREF19 , we report average precision and recall for both transformer and our model across our train and validation query set over two language pairs. Our model generates precise translation, i.e. it avoids terms that might be useless or even harmful for retrieval. Generally, from our observation, avoided terms are highly likely terms from TC and they are generated because of translation model overfitting. Our model achieves a regularization effect through an auxiliary task. This confirms results from existing multi-tasking literature BIBREF15 .
To explore translation quality, consider a pair of sample translations provided by the two models. For example, against an Italian query, medaglia oro super vinse medaglia oro super olimpiadi invernali lillehammer, the translated term set from our model is {gold, coin, super, free, harmonising, won, winter, olympics}, while the transformer output is {olympic, gold, one, coin, super, years, won, parliament, also, two, winter}. The term set from the human translation is: {super, gold, medal, won, lillehammer, olypmic, winter, games}. The transformer comes up with terms like parliament, also, two and years that never appear in the human translation. We found that these terms are very likely in Europarl and rare in CLEF. Our model also generates terms such as harmonising, free, olympics that are not generated by the transformer. However, we found that these terms are equally likely in Europarl and CLEF.
Our model achieves significant performance gains in the test sets over the baseline for both Italian and Finnish query translation.
|
Does the model proposed by the authors achieve significant performance gains in the test sets over the baseline for both Italian and Finnish query translation?
|
Yes, it does.
|
null | false
| null |
Asa Butler Wood (August 26, 1865 - May 7, 1945) was an American politician and newspaper publisher, serving as a state senator in the U.S. state of Nebraska from 1924 to 1930. Born in Wapello County, Iowa, Wood entered the newspaper business as a printer in his teens. In 1887, at the age of 21, he founded the Gering Courier, a newspaper he ran until his death in 1945. He was known as "the dean of the newspapermen of the Nebraska Panhandle," and was at one time president of the Nebraska Press Association. He was also a breeder of cattle, and, like many publishers of that time, the local postmaster.
In 1925, he was elected state senator of Nebraska's Thirty-third District. He served as a Republican senator through 1930, and was known as an advocate of clean government and irrigation policy.
Described as a "walking encyclopedia" of western Nebraska history, he served as president of the Nebraska Historical Society from 1936 to 1938.
He left the Courier to his son, Warren Wood, on his death in 1945.
|
Who is Asa Wood?
|
Born in Iowa, Asa Wood was a politician and newspaper publisher who later served as a state senator of Nebraska.
|
null | false
| null |
Why are shiba inus the best dogs?
|
Shiba inus are the best dogs because they are intuitive, smart, independent, affectionate (at times) and easy to train. They are also the perfect size (moderately compact), weighing on average 20-25 lbs, making them small enough to carry but sturdy enough to hug. This makes them great for smaller-sized homes or apartments. They also tend to be quite friendly to other humans and pets, so it's easy to take them to dog parks, on hikes and to beaches without having to worry about aggression towards humans or other dogs.
The most important reason -- shiba inus are just unbelievably cute -- they look like foxes, with their prick ears, lush curly tail and thick double coats.
|
|
null | false
| 314
|
In information retrieval (IR), queries and documents are typically represented by term vectors where each term is a content word and weighted by tf-idf, i.e. the product of the term frequency and the inverse document frequency, or other weighting schemes BIBREF0 . The similarity of a query and a document is then determined as a dot product or cosine similarity. Although this works reasonably, the traditional IR scheme often fails to find relevant documents when synonymous or polysemous words are used in a dataset, e.g. a document including only “neoplasm" cannot be found when the word “cancer" is used in a query. One solution of this problem is to use query expansion BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 or dictionaries, but these alternatives still depend on the same philosophy, i.e. queries and documents should share exactly the same words.
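The vocabulary-mismatch problem described here can be made concrete with a toy tf-idf retrieval sketch (the corpus, tokenization, and idf variant are illustrative assumptions, and the query's weights are computed jointly with the documents for simplicity):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Weight each term by tf * log(N / df) for a list of tokenized texts."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    return [{t: tf * math.log(n / df[t]) for t, tf in Counter(doc).items()}
            for doc in docs]

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dictionaries."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# A synonym document ("neoplasm") shares no query terms with positive weight,
# so it scores zero against the query "cancer therapy".
docs = [["neoplasm", "therapy", "study"], ["cancer", "therapy", "study"]]
query = ["cancer", "therapy"]
vecs = tfidf_vectors(docs + [query])
qv, dvs = vecs[-1], vecs[:-1]
print([round(cosine(qv, dv), 3) for dv in dvs])  # [0.0, 0.707]
```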
While the term vector model computes similarities in a sparse and high-dimensional space, the semantic analysis methods such as latent semantic analysis (LSA) BIBREF5 , BIBREF6 and latent Dirichlet allocation (LDA) BIBREF7 learn dense vector representations in a low-dimensional space. These methods choose a vector embedding for each term and estimate a similarity between terms by taking an inner product of their corresponding embeddings BIBREF8 . Since the similarity is calculated in a latent (semantic) space based on context, the semantic analysis approaches do not require having common words between a query and documents. However, it has been shown that LSA and LDA methods do not produce superior results in various IR tasks BIBREF9 , BIBREF10 , BIBREF11 and the classic ranking method, BM25 BIBREF12 , usually outperforms those methods in document ranking BIBREF13 , BIBREF14 .
Neural word embedding BIBREF15 , BIBREF16 is similar to the semantic analysis methods described above. It learns low-dimensional word vectors from text, but while LSA and LDA utilize co-occurrences of words, neural word embedding learns word vectors to predict context words BIBREF10 . Moreover, training of semantic vectors is derived from neural networks. Both co-occurrence and neural word embedding approaches have been used for lexical semantic tasks such as semantic relatedness (e.g. king and queen), synonym detection (e.g. cancer and carcinoma) and concept categorization (e.g. banana and pineapple belong to fruits) BIBREF10 , BIBREF17 . But, Baroni et al. Baroni2014 showed that neural word embedding approaches generally performed better on such tasks with less effort required for parameter optimization. The neural word embedding models have also gained popularity in recent years due to their high performance in NLP tasks BIBREF18 .
Here we present a query-document similarity measure using a neural word embedding approach. This work is particularly motivated by the Word Mover's Distance BIBREF19 . Unlike the common similarity measure taking query/document centroids of word embeddings, the proposed method evaluates a distance between individual words from a query and a document. Our first experiment was performed on the TREC 2006 and 2007 Genomics benchmark sets BIBREF20 , BIBREF21 , and the experimental results showed that our approach was better than BM25 ranking. This was solely based on matching queries and documents by the semantic measure and no other feature was used for ranking documents.
In general, conventional ranking models (e.g. BM25) rely on a manually designed ranking function and require heuristic optimization for parameters BIBREF22 , BIBREF23 . In the age of information explosion, this one-size-fits-all solution is no longer adequate. For instance, it is well known that links to a web page are an important source of information in web document search BIBREF24 , hence using the link information as well as the relevance between a query and a document is crucial for better ranking. In this regard, learning to rank BIBREF22 has drawn much attention as a scheme to learn how to combine diverse features. Given feature vectors of documents and their relevance levels, a learning to rank approach learns an optimal way of weighting and combining multiple features.
We argue that the single scores (or features) produced by BM25 and our proposed semantic measure complement each other, thus merging these two has a synergistic effect. To confirm this, we measured the impact on document ranking by combining BM25 and semantic scores using the learning to rank approach, LamdaMART BIBREF25 , BIBREF26 . Trained on PubMed user queries and their click-through data, we evaluated the search performance based on the most highly ranked 20 documents. As a result, we found that using our semantic measure further improved the performance of BM25.
Taken together, we make the following important contributions in this work. First, to the best of our knowledge, this work represents the first investigation of query-document similarity for information retrieval using the recently proposed Word Mover's Distance. Second, we modify the original Word Mover's Distance algorithm so that it is computationally less expensive and thus more practical and scalable for real-world search scenarios (e.g. biomedical literature search). Third, we measure the actual impact of neural word embeddings in PubMed by utilizing user queries and relevance information derived from click-through data. Finally, on TREC and PubMed datasets, our proposed method achieves stronger performance than BM25.
This was solely based on matching queries and documents by the semantic measure and no other feature was used for ranking documents.
|
What is the proposed approach based on?
|
It is based on matching queries and documents by the semantic measure.
|
null | false
| null |
People generally take advantage of the high temperatures by spending more time outdoors during summer. Activities such as travelling to the beach and picnics occur during the summer months. Sports including cricket, association football (soccer), horse racing, basketball, American football, volleyball, skateboarding, baseball, softball, tennis and golf are played.
Water sports also occur. These include water skiing, wakeboarding, swimming, surfing, tubing and water polo. The modern Olympics have been held during the summer months every four years since 1896. The 2000 Summer Olympics, in Sydney, were held in spring and the 2016 Summer Olympics, in Rio de Janeiro, were held in winter.
In the United States, many television shows made for children are released during the summer, as children are off school.
Conversely, the music and film industries generally experience higher returns during the summer than other times of the year and market their summer hits accordingly. Summer is popular for animated movies to be released theatrically in movie theaters.
|
From the given text, suggest some good activities for passing time during summer season.
|
1. During the summer season, outdoor activities such as travelling to the beach and picnics are popular.
2. Outdoor games such as cricket, soccer, horse racing, basketball, American football, volleyball, skateboarding, baseball, softball, tennis and golf are often played in the summer season.
3. Water sports such as water skiing, wakeboarding, swimming, surfing, tubing and water polo are also favoured by people in the summer months.
4. In the United States, children can enjoy watching television, as many television shows are released during summer while children are off school.
5. Animated movies are often launched in theatres in summer.
|
null | false
| 438
|
We report all timing estimations, unless stated otherwise, based on using an RTX 2080 Ti graphics card on the same single machine. We report all sample speed timings in seconds per example. Sample timing estimations were calculated with batches of 100 sequences on the same single machine; the only exception was DARK0-Grad, which was run in batches of 10 due to memory constraints. Model training and sample refinement were performed with resources available at the time, which typically consisted of V100 graphics cards. As such, we report timings for training, refinement, and simulated annealing experiments as they would be on V100 graphics cards. The average time taken for refinement is 51.3 seconds.
|
You state in section 3.1 that you do not consider Cysteine for experimental reasons? Why? Can you elaborate on the implied limitations of this choice?
|
Thank you, we agree this should have been explained in more detail and we have added further explanation to the text in 3.1 that additionally links to further detail in Appendix A.6. In short, not including cysteine is common in de novo protein design studies and does not pose any significant limitations to our work or the conclusions drawn from it. It also provides a benefit. Its inclusion makes expression of any designed sequence in bacteria (the most common way to produce designs in the lab) more challenging.
Cysteine is special among amino acids, as it mainly occurs in proteins to form disulfide bonds, particularly in secreted proteins and outward-facing membrane proteins. This happens between pairs of cysteines, and forms a strong, usually stabilizing, bond between them. This bond cannot form in the cytosol of the cell, so disulfide-containing proteins are expressed in an otherwise different way. This is the root cause of why it complicates validation of designed sequences. In the de novo design literature the inclusion of cysteine is typically discussed as the inclusion of disulfide bonds. De novo designed proteins are known for being hyperstable without disulfides, and it is often a question of whether the inclusion of disulfide bonds is justified. In other words, it's common that the default stance is to not include disulfide bonds, or to add them to preexisting candidates. For example, the Rosetta software package has separate protocols explicitly for adding disulfides to designs. Similarly to trDesign, we don't feel that including them here provides a clear benefit compared to the complications it brings.
|
null | false
| null |
What is heavier, 1 centimetre or 5 miles?
|
Unfortunately, I can't answer that question. A centimetre and a mile are both units of measurement and therefore don't have an associated weight.
|
|
null | false
| null |
How can I improve Kafka performance with Spark?
|
To improve performance, you can adjust the consumer and producer settings to match your use case. Additionally, you can use the latest versions of Kafka and Spark and increase the number of partitions in the topics. You can also optimize the code by using efficient transformations and caching.
|
|
null | false
| 397
|
The goal of meeting transcription is to have machines generate speaker-annotated transcripts of natural meetings based on their audio and optionally video recordings. Meeting transcription and analytics would be a key to enhancing productivity as well as improving accessibility in the workplace. It can also be used for conversation transcription in other domains such as healthcare BIBREF0. Research in this space was promoted in the 2000s by NIST Rich Transcription Evaluation series and public release of relevant corpora BIBREF1, BIBREF2, BIBREF3. While systems developed in the early days yielded high error rates, advances have been made in individual component technology fields, including conversational speech recognition BIBREF4, BIBREF5, far-field speech processing BIBREF6, BIBREF7, BIBREF8, and speaker identification and diarization BIBREF9, BIBREF10, BIBREF11. When cameras are used in addition to microphones to capture the meeting conversations, speaker identification quality could be further improved thanks to the computer vision technology. These trends motivated us to build an end-to-end audio-visual meeting transcription system to identify and address unsolved challenges. This report describes our learning, with focuses on overall architecture design, overlapped speech recognition, and audio-visual speaker diarization.
When designing meeting transcription systems, different constraints must be taken into account depending on targeted scenarios. In some cases, microphone arrays are used as an input device. If the names of expected meeting attendees are known beforehand, the transcription system should be able to provide each utterance with the true identity (e.g., “Alice” or “Bob”) instead of a randomly generated label like “Speaker1”. It is often required to show the transcription in near real time, which makes the task more challenging.
This work assumes the following scenario. We consider a scheduled meeting setting, where an organizer arranges a meeting in advance and sends invitations to attendees. The transcription system has access to the invitees' names. However, actual attendees may not completely match those invited to the meeting. The users are supposed to enroll themselves in the system beforehand so that their utterances in the meeting can be associated with their names. The meeting is recorded with an audio-visual device equipped with a seven-element circular microphone array and a fisheye camera. Transcriptions must be shown with a latency of up to a few seconds.
This paper investigates three key challenges.
Speech overlaps: Recognizing overlapped speech has been one of the main challenges in meeting transcription with limited tangible progress. Numerous multi-channel speech separation methods were proposed based on independent component analysis or spatial clustering BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17. However, there was little successful effort to apply these methods to natural meetings. Neural network-based single-channel separation methods using techniques like permutation invariant training (PIT) BIBREF18 or deep clustering (DC) BIBREF19 are known to be vulnerable to various types of acoustic distortion, including reverberation and background noise BIBREF20. In addition, these methods were tested almost exclusively on small-scale segmented synthetic data and have not been applied to continuous conversational speech audio. Although the recently held CHiME-5 challenge helped the community make a step forward to a realistic setting, it still allowed the use of ground-truth speaker segments BIBREF21, BIBREF22.
We address this long-standing problem with a continuous speech separation (CSS) approach, which we proposed in our latest conference papers BIBREF23, BIBREF24. It is based on an observation that the maximum number of simultaneously active speakers is usually limited even in a large meeting. According to BIBREF25, two or fewer speakers are active for more than 98% of the meeting time. Thus, given continuous multi-channel audio observation, we generate a fixed number, say $N$, of time-synchronous signals. Each utterance is separated from overlapping voices and background noise. Then, the separated utterance is spawned from one of the $N$ output channels. For periods where the number of active speakers is fewer than $N$, the extra channels generate zeros. We show how continuous speech separation can fit in with an overall meeting transcription architecture to generate speaker-annotated transcripts.
Note that our speech separation system does not make use of a camera signal. While much progress has been made in audio-visual speech separation, the challenge of dealing with all kinds of image variations remains unsolved BIBREF26, BIBREF27, BIBREF28.
Extensible framework: It is desirable that a single transcription system be able to support various application settings for both maintenance and scalability purposes. While this report focuses on the audio-visual setting, our broader work covers an audio-only setting as well as the scenario where no prior knowledge of meeting attendees is available. A modular and versatile architecture is desired to encompass these different settings.
To this end, we propose a framework called SRD, which stands for “separate, recognize, and diarize”, where CSS, speech recognition, and speaker diarization take place in tandem. Performing CSS at the beginning allows the other modules to operate on overlap-free signals. Diarization is carried out after speech recognition because its implementation can vary significantly depending on the application settings. By choosing an appropriate diarization module for each setting, multiple use cases can be supported without changing the rest of the system. This architecture also allows transcriptions to be displayed in real time without speaker information. Speaker identities for each utterance may be shown after a couple of seconds.
Audio-visual speaker diarization: Speaker diarization, a process of segmenting input audio and assigning speaker labels to the individual segments, can benefit from a camera signal. The phenomenal improvements that have been made to face detection and identification algorithms by convolutional neural networks (CNNs) BIBREF29, BIBREF30, BIBREF31 make the camera signal very appealing for speaker diarization. While much prior work assumes the batch processing scenario where the entire meeting recording can be processed multiple times, several studies deal with online processing BIBREF32, BIBREF33, BIBREF34, BIBREF35. However, no previous studies comprehensively address the challenges that one might encounter in real meetings. BIBREF32, BIBREF33 do not cope with speech overlaps. While the methods proposed in BIBREF34, BIBREF35 address the overlap issue, they rely solely on spatial cues and thus are not applicable when multiple speakers sit side by side.
Our diarization method handles overlapping utterances as well as co-located speakers by utilizing the time-frequency (TF) masks generated by CSS in speaker identification and sound source localization (SSL). In addition, several enhancements are made to face identification to improve robustness to image variations caused by face occlusions, extreme head pose, lighting conditions, and so on.
These trends motivated us to build an end-to-end audio-visual meeting transcription system to identify and address unsolved challenges.
|
What system do the authors develop?
|
An end-to-end audio-visual meeting transcription system to identify and address unsolved challenges.
|
1910.00912
| false
| null |
We tested the system on two datasets, different in size and complexity of the addressed language.
Experimental Evaluation ::: Datasets ::: NLU-Benchmark dataset
The first (publicly available) dataset, NLU-Benchmark (NLU-BM), contains $25,716$ utterances annotated with targeted Scenario, Action, and involved Entities. For example, “schedule a call with Lisa on Monday morning” is labelled to contain a calendar scenario, where the set_event action is instantiated through the entities [event_name: a call with Lisa] and [date: Monday morning]. The Intent is then obtained by concatenating scenario and action labels (e.g., calendar_set_event). This dataset consists of multiple home assistant task domains (e.g., scheduling, playing music), chit-chat, and commands to a robot BIBREF7.
Experimental Evaluation ::: Datasets ::: ROMULUS dataset
The second dataset, ROMULUS, is composed of $1,431$ sentences, for each of which dialogue acts, semantic frames, and corresponding frame elements are provided. This dataset is being developed for modelling user utterances to open-domain conversational systems for robotic platforms that are expected to handle different interaction situations/patterns – e.g., chit-chat, command interpretation. The corpus is composed of different subsections, addressing heterogeneous linguistic phenomena, ranging from imperative instructions (e.g., “enter the bedroom slowly, turn left and turn the lights off ”) to complex requests for information (e.g., “good morning I want to buy a new mobile phone is there any shop nearby?”) or open-domain chit-chat (e.g., “nope thanks let's talk about cinema”). A considerable number of utterances in the dataset is collected through Human-Human Interaction studies in robotic domain ($\approx $$70\%$), though a small portion has been synthetically generated for balancing the frame distribution.
We tested the system on two datasets, different in size and complexity of the addressed language.
Experimental Evaluation ::: Datasets ::: NLU-Benchmark dataset
The first (publicly available) dataset, NLU-Benchmark (NLU-BM), contains $25,716$ utterances annotated with targeted Scenario, Action, and involved Entities.
Experimental Evaluation ::: Datasets ::: ROMULUS dataset
The second dataset, ROMULUS, is composed of $1,431$ sentences, for each of which dialogue acts, semantic frames, and corresponding frame elements are provided. This dataset is being developed for modelling user utterances to open-domain conversational systems for robotic platforms that are expected to handle different interaction situations/patterns – e.g., chit-chat, command interpretation.
|
Which publicly available NLU dataset is used?
|
The answers are shown as follows:
* NLU-Benchmark dataset
|
2002.01861
| false
| null |
We extend BERT Base-Chinese (12-layer, 768-hidden, 12-heads, 110M parameters) for sequence labeling. All documents are segmented into paragraphs and processed at the paragraph level (both training and inference); this is acceptable because we observe that most paragraphs are less than 200 characters. The input sequences are segmented by the BERT tokenizer, with the special [CLS] token inserted at the beginning and the special [SEP] token added at the end. All inputs are then padded to a length of 256 tokens. After feeding through BERT, we obtain the hidden state of the final layer, denoted as ($h_{1}$, $h_{2}$, ... $h_{N}$) where $N$ is the max length setting. We add a fully-connected layer and softmax on top, and the final prediction is formulated as:
All documents are segmented into paragraphs and processed at the paragraph level (both training and inference); this is acceptable because we observe that most paragraphs are less than 200 characters. The input sequences are segmented by the BERT tokenizer, with the special [CLS] token inserted at the beginning and the special [SEP] token added at the end.
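The prediction head described above (final-layer hidden states fed through a fully-connected layer and softmax, producing a label per token) can be sketched numerically. This is a minimal illustrative sketch, not the paper's implementation: the shapes of 256 padded tokens and 768 hidden units follow the text, while the label count of 5 and the random weights are purely hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the label dimension.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def token_label_probs(hidden_states, W, b):
    """hidden_states: (N, H) final-layer BERT states for one padded sequence.
    Returns an (N, L) matrix of per-token label probabilities."""
    return softmax(hidden_states @ W + b)

# Toy shapes: N=256 padded tokens, H=768 hidden size, L=5 labels (L is illustrative).
rng = np.random.default_rng(0)
h = rng.normal(size=(256, 768))      # stand-in for BERT's final hidden layer
W = rng.normal(size=(768, 5)) * 0.01 # fully-connected layer weights
b = np.zeros(5)
p = token_label_probs(h, W, b)       # one probability distribution per token
```

Each row of `p` is a distribution over labels for one token; taking the argmax per row yields the sequence-labeling prediction.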
|
At what text unit/level were documents processed?
|
The answers are shown as follows:
* documents are segmented into paragraphs and processed at the paragraph level
|
null | false
| null |
Who are the known members of the student group named Dumbledore's Army, from the Harry Potter book series?
|
The known list of students in Dumbledore's Army is as follows:
Harry Potter (leader and teacher)
Ron Weasley (co-founder)
Hermione Granger (co-founder)
Neville Longbottom (leader)
Ginny Weasley (leader)
Luna Lovegood (leader)
Angelina Johnson
Alice Tolipan
Alicia Spinnet
Anthony Goldstein
Cho Chang
Colin Creevey
Cormac McLaggen
Dean Thomas
Dennis Creevey
Ernie Macmillan
Fred Weasley
George Weasley
Hannah Abbott
Justin Finch-Fletchley
Katie Bell
Lavender Brown
Leanne
Lee Jordan
Luca Caruso
Maisy Reynolds
Marietta Edgecombe
Michael Corner
Nigel Wolpert
Padma Patil
Parvati Patil
Romilda Vane
Seamus Finnigan
Susan Bones
Terry Boot
Zacharias Smith
|
|
null | false
| null |
What is the name of the season that begins in the summer and ends in the winter?
|
Fall
|
|
null | false
| null |
how to express love without saying it out loud?
|
The book The Five Love Languages describes five ways partners express and experience love: words of affirmation (compliments), quality time, gift giving, acts of service, and physical touch. Although words of affirmation must be spoken, the other four can be expressed without a word. Some people also say that preparing and enjoying food together is another good way of bonding. Other suggestions include keeping a date routine, such as watching favorite movies or going to a dance hall and dancing to some Barry White on a fixed day every month.
|
|
null | false
| null |
Tell me more about congratulatory message.
|
A message of congratulations or best wishes sent to someone after they have accomplished a noteworthy feat or milestone is known as a congratulatory message. Congratulatory messages can be conveyed via a variety of media, including social media posts, cards, emails, and text messages.
It's customary to send congratulations to recognise and celebrate someone's accomplishment of a goal or significant milestone. They can be sent for a range of events, such as graduations, promotions, new jobs, marriages, births, or personal triumphs.
Depending on the relationship between the sender and the recipient, as well as the type of accomplishment being honoured, the tone and style of a congratulatory message may change: some messages may be formal and professional, while others may be casual and warm.
|
|
null | false
| null |
why is Mahatma Gandhi loved by everyone?
|
Mahatma Gandhi preached non-cooperation through non-violent methods as a way to prevail against oppressive rulers without resorting to guns. This is difficult to do and requires the public's cooperation.
|
|
null | false
| null |
What is an API?
|
An application programming interface (API) is a way for two or more computer programs to communicate with each other. It is a type of software interface, offering a service to other pieces of software.
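As a minimal illustration of the definition above, the sketch below shows one piece of software (a "library") offering a service through an interface, and a second piece (the "client") using that service without knowing its internals. The `TemperatureService` name and its data are hypothetical, invented purely for this example.

```python
class TemperatureService:
    """The API: client code relies only on this public method signature."""

    def __init__(self):
        # Internal state the client never touches directly (hypothetical data).
        self._readings_c = [21.0, 22.5, 19.8]

    def average_celsius(self) -> float:
        """The service offered to other software."""
        return sum(self._readings_c) / len(self._readings_c)


# The client programs against the interface, not the implementation:
service = TemperatureService()
avg = service.average_celsius()
```

Because the client depends only on `average_celsius()`, the service can change how readings are stored or computed without breaking callers; that contract is what makes it an API.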
|
|
null | false
| null |
2022 UEFA European Under 19 Championship qualification
|
The 2022 UEFA European Under-19 Championship qualifying competition was a men's under-19 football competition that determined the seven teams joining the automatically qualified hosts Slovakia in the 2022 UEFA European Under-19 Championship final tournament. Players born on or after 1 January 2003 were eligible to participate.
Originally, the qualifying competition would use a new format with teams split into three different leagues with promotion and relegation, with the draw of the first round under the new format already held in December 2019. However, on 17 June 2020, UEFA announced that the introduction of the new format had been postponed to the next edition due to the COVID-19 pandemic, and qualification for this edition would use the previous format involving two rounds only.
Apart from Slovakia, 53 of the remaining 54 UEFA member national teams entered the qualifying competition, where the original format consisted of a qualifying round that took place in autumn 2021, and an elite round that took place in spring 2022.
|