| paper_id (stringlengths 10-10, ⌀) | yes_no (bool, 2 classes) | paper_index (int64, 0-519, ⌀) | evidence (stringlengths 0-37.7k) | question (stringlengths 4-11.7k) | answer (stringlengths 1-26k) |
|---|---|---|---|---|---|
null | false
| null |
A strain can occur as a result of improper body mechanics with any activity (e.g., contact sports, lifting heavy objects) that can induce mechanical trauma or injury. Generally, the muscle or tendon overstretches and is placed under more physical stress than it can withstand. Strains commonly result in a partial or complete tear of a tendon or muscle, or they can be severe in the form of a complete tendon rupture. Strains most commonly occur in the foot, leg, or back. Acute strains are more closely associated with recent mechanical trauma or injury. Chronic strains typically result from repetitive movement of the muscles and tendons over a long period of time.
Degrees of Injury (as classified by the American College of Sports Medicine):
First degree (mildest) – little tissue tearing; mild tenderness; pain with full range of motion.
Second degree – torn muscle or tendon tissues; painful, limited motion; possibly some swelling or depression at the spot of the injury.
Third degree (most severe) – limited or no movement; severe acute pain, though sometimes painless straight after the initial injury.
|
Given this paragraph about muscle strains and that my symptoms include severe and acute pain with movement limitations, how bad is my strain?
|
You likely have a third-degree (Grade 3) strain, the most severe classification.
|
null | false
| null |
Why does my chicken create so much liquid when cooked in the skillet?
|
Once you dice up your chicken and pop it into the skillet, try to leave it undisturbed for 3-4 minutes while it cooks. Once browned on that side, flip it and do not disturb it again until the other side is browned. This will help reduce the amount of liquid created in your skillet. If you still end up with too much liquid and want to reduce it to concentrate the seasonings and flavor, take the chicken out and set it on a plate while the liquid simmers down. This way you avoid overcooking the chicken while also reducing the liquid left in the skillet and intensifying the flavor that remains.
|
|
null | false
| 265
|
Categories such as animal or furniture are fundamental cognitive building blocks allowing humans to efficiently represent and communicate the complex world around them. Concepts (e.g., dog, table) are grouped into categories based on shared properties pertaining, for example, to their behavior, appearance, or function. Categorization underlies other cognitive functions such as perception BIBREF0, BIBREF1 or language BIBREF2, BIBREF3, and there is evidence that categories are not only shaped by the world they represent, but also by the language through which they are communicated BIBREF4, BIBREF5. Although mental categories exist across communities and cultures, their exact manifestations differ BIBREF6, BIBREF7, BIBREF8, BIBREF9. For example, American English speakers prefer taxonomic categorizations (e.g., mouse, squirrel) while Chinese speakers tend to prefer to categorize objects relationally (e.g., tree, squirrel; BIBREF7).
Given their prevalent function in human cognition, the acquisition and representation of categories have attracted considerable attention in cognitive science, and numerous theories have emerged BIBREF10, BIBREF11, BIBREF12, BIBREF13. Empirical studies of category acquisition and representation have been predominantly based on small-scale laboratory experiments. In a typical experiment, human subjects are presented with small sets of often artificial concepts, such as binary strings BIBREF14 or colored shapes BIBREF15, with strictly controlled features BIBREF16, BIBREF17, BIBREF18. Hypotheses and principles of human categorization are established based on the processes and characteristics of the categorizations produced by the participants. The distribution of subjects participating in such studies is often skewed towards members of cultural and socioeconomic groups which are prevalent in the environment where the research is conducted, and typically consists largely of western, educated, wealthy, and English-speaking participants, often sampled from the even more specific population of college students. This demographic and socioeconomic bias has long been recognized, and the question of how it might impact conclusions about human cognition in general BIBREF19 and category learning specifically is under active debate BIBREF9. Although laboratory studies are invaluable for understanding categorization phenomena in a controlled environment, they are also expensive and time-consuming to conduct, and consequently problematic to scale.
In this work, we scale the investigation of category learning and representation along two axes: (1) the complexity of the learning environment, and consequently the richness of learnable concept and category representations, and (2) the diversity of languages and cultures considered in evaluation. We present a novel knowledge-lean, cognitively motivated Bayesian model which learns categories and their structured features jointly from large natural language text corpora in five diverse languages: Arabic, Chinese, English, French, and German. We approximate the learning environment using large corpora of natural language text. Language has been shown to redundantly encode much of the non-linguistic information in the natural environment BIBREF20, and to influence the emergence of categories BIBREF4, BIBREF5. Besides, text corpora can cover arbitrarily semantically complex domains and are available across languages, providing an ideal test environment for studying categorization at scale.
Figure 1 illustrates example input to our model, and Figure 2 shows example categories and associated features as induced by our model from the English Wikipedia. Following prior work BIBREF21, BIBREF22, we create language-specific sets of stimuli, each consisting of a mention of a target concept (e.g., apple) within its local linguistic context (e.g., {contains, seeds}; cf. Figure 1). We consider each stimulus an observation of the concept, i.e., the word referring to the concept is an instance of the concept itself, and its context words are a representation of its features. Our model infers categories as groups of concepts occurring with similar features, and it infers feature types as groups of features which co-occur with each other. The outputs of our model (cf. Figure 2) are categories as clusters of concepts, each associated with a set of feature types, i.e., thematically coherent groups of features. We train a separate model on each of our target languages, each time presenting the model with input stimuli from the relevant language.
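A minimal sketch (ours, not the authors' code) of the stimulus construction just described, assuming a plain tokenized corpus; the concept vocabulary and window size are illustrative assumptions:

```python
# Build (target concept, context-word features) stimuli from tokenized text.
# CONCEPTS and WINDOW are hypothetical choices, not the paper's preprocessing.
from typing import Iterator, List, Tuple

CONCEPTS = {"apple", "dog", "table"}  # hypothetical target-concept vocabulary
WINDOW = 5                            # hypothetical context-window size

def stimuli(tokens: List[str]) -> Iterator[Tuple[str, List[str]]]:
    """Yield (concept mention, context-word features) pairs."""
    for i, tok in enumerate(tokens):
        if tok in CONCEPTS:
            context = tokens[max(0, i - WINDOW):i] + tokens[i + 1:i + 1 + WINDOW]
            yield tok, context

tokens = "an apple contains seeds and grows on a tree".split()
for concept, features in stimuli(tokens):
    print(concept, features)  # apple ['an', 'contains', 'seeds', 'and', 'grows', 'on']
```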
Computational models in general, and Bayesian models in particular, allow us to investigate hypotheses about cognitive phenomena by systematically modifying the learning mechanism or available input while observing the learning outcome. Bayesian models have been applied to a variety of cognitive phenomena BIBREF23, BIBREF24, BIBREF25, and category acquisition is no exception. Following Anderson's seminal work BIBREF14, BIBREF26, BIBREF27, a number of models have been developed and tested in their ability to reproduce human behavior in laboratory settings by exposing the models to small sets of controlled inputs with restricted features. In this work we draw on the full potential of computational modeling by exposing our models to (a) more complex data reflecting the diversity of contexts in which concepts can be observed; and (b) input data in different languages, shedding light on the applicability of computational cognitive models beyond the prevalent English test language.
Categorization tasks in a laboratory environment typically involve stimuli with a small set of features which are relevant to the categorization target, eliminating the need to detect features and discriminate among them by relevance. In the real world, however, concepts are observed in contexts, and a substantial part of acquiring categorical knowledge involves learning which features are useful to discriminate among concepts. In fact, research has shown that humans learn features jointly with categories BIBREF28, BIBREF29 and that these features are themselves structured so as to represent the diversity and complexity of the properties exhibited in the world BIBREF30, BIBREF31, BIBREF32. The novel model of category learning presented in this article jointly learns categories and their structured features from large sets of informationally rich data.
Our work exemplifies the opportunities that arise from computational models and large data sets for investigating the mechanisms with which conceptual representations emerge, as well as the representations themselves in a broader context. We simulate the acquisition of categories comprising hundreds of concepts by approximating the learning environment with natural language text. Language has been shown to redundantly encode much of the non-linguistic information in the natural environment BIBREF20, as well as human-like biases BIBREF33, and to influence the emergence of categories BIBREF4, BIBREF5. Text corpora are a prime example of naturally occurring large-scale data sets BIBREF34, BIBREF35, BIBREF36. In analogy to real-world situations, they encapsulate rich, diverse, and potentially noisy information. The wide availability of corpora allows us to train and evaluate cognitive models on data from diverse languages and cultures. We test our model on corpora from five languages, derived from the online encyclopedia Wikipedia in Arabic, Chinese, French, English, and German. Wikipedia is a valuable resource for our study because it (a) discusses concepts and their properties explicitly and can thus serve as a proxy for the environment speakers of a language are exposed to; and (b) allows us to construct corpora which are highly comparable in their content across languages, controlling for effects of genre or style.
We present a series of evaluations investigating the quality of the induced categories and features. Leveraging a reference comprising hundreds of concepts and more than 30 categories, we demonstrate that our model learns meaningful categories in all five target languages. We furthermore show, through crowd-sourced evaluations involving native speakers of each target language, that the induced feature types are (a) thematically coherent and interpretable; and (b) associated with categories in comprehensible ways. We discuss language-specific idiosyncrasies emerging from the induced representations.
In the remainder of this article, we first review related literature, before we present a cognitively motivated model for learning categories and their structured representations from large natural language corpora. We then evaluate the quality of the emerging representations, as well as the generalizability of our model across languages. Note that the primary goal of this work is not to characterize differences in categories and features arising from different languages (even though this would be an interesting avenue for future work). Rather, we aim to demonstrate the utility of large-scale naturalistic datasets for cognitive modeling, and to verify mechanisms of categorization known from laboratory studies at scale and across communities.
Based on their reports, in this paper, we propose a neural language model that utilizes character n-gram embeddings. Our proposed method encodes character n-gram embeddings into a word embedding with simplified Multi-dimensional Self-attention (MS).
|
What is the neural language model proposed by the authors?
|
It is a neural language model that utilizes character n-gram embeddings, encoding them into a word embedding with simplified Multi-dimensional Self-attention (MS).
|
null | false
| null |
How do I decline an invitation to my Aunt's Birthday party?
|
Dear Aunt Ruth,
I was so excited when I received the invitation to your birthday party next Saturday. Celebrating you and your birthday is so important to me. You have always been one of those people in my life that has cheered me on. I've always known I have an ally in my corner with you in my life.
My excitement turned to sadness when I tried to find a replacement at work for next Saturday. No one was able to cover for me. I will have to miss being with you physically. Count on me joining you with my heart! I will be wishing you an awesome birthday and hoping for a year full of happiness and joy.
I love you Aunt Ruth. Thank you for the invitation. Please don't forget to invite me next year! I can't wait for the next time we see each other.
Love,
Your Nephew
|
|
null | false
| null |
What are the primary causes of inflation?
|
When an economy has too much money (an oversized money supply), the value of the currency decreases as the total amount of money increases.
Prices of goods/services increase (price inflation) when the money supply increases (monetary inflation) over a longer period of time.
Economists disagree on the causes over shorter periods of time.
|
|
null | false
| null |
After using the Gamebryo engine to create The Elder Scrolls III: Morrowind, The Elder Scrolls IV: Oblivion, and Fallout 3, Bethesda decided that Gamebryo's capabilities were becoming too outdated and began work on the Creation Engine for their next game, The Elder Scrolls V: Skyrim, by forking the codebase used for Fallout 3.
Following the completion of Skyrim, Bethesda set out to enhance the graphical core of the Creation Engine by first adding a physically based deferred renderer to allow for more dynamic lighting and to paint object surfaces with realistic materials. Bethesda worked with technology company Nvidia to implement volumetric lighting through a technique that makes use of hardware tessellation. Additionally, the updated version of the Creation Engine powering Bethesda's Fallout 4 offers more advanced character generation.
|
What's the game engine used by Bethesda before Creation Engine? And what's the last game make with it?
|
Gamebryo was the engine used by Bethesda before the Creation Engine. It is developed by Gamebase Co., Ltd. and Gamebase USA. Gamebryo is used by numerous companies within the gaming industry.
Bethesda decided that Gamebryo's capabilities were becoming too outdated and began work on the Creation Engine for their next game, The Elder Scrolls V: Skyrim, by forking the codebase used for Fallout 3.
The last game from Bethesda made with Gamebryo engine is Fallout 3 (2008).
|
null | false
| null |
Name some famous books on philosophy?
|
Meditations by Marcus Aurelius.
Man's Search for Meaning by Viktor Frankl.
The Story of Philosophy by Will Durant.
The Republic by Plato.
Discourses by Epictetus.
|
|
null | false
| null |
Are cats or dogs better?
|
The debate over whether cats or dogs are better has gone on for ages, and there is no clear winner. Humans love being binary and feeling the need to choose one or the other. Each person is different, and can benefit from cats AND dogs in their life. You can like both cats and dogs, and like them equally well.
|
|
null | false
| 383
|
Parkinson's disease (PD) is a neurodegenerative disorder characterized by the progressive loss of dopaminergic neurons in the mid-brain producing several motor and non-motor impairments in the patients BIBREF0. Motor symptoms include among others, bradykinesia, rigidity, resting tremor, micrographia, and different speech impairments. The speech impairments observed in PD patients are typically grouped as hypokinetic dysarthria, and include symptoms such as vocal folds rigidity, bradykinesia, and reduced control of muscles and limbs involved in the speech production. The effects of dysarthria in the speech of PD patients include increased acoustic noise, reduced intensity, harsh and breathy voice quality, increased voice nasality, monopitch, monoloudness, speech rate disturbances, imprecise articulation of consonants BIBREF1, and involuntary introduction of pauses BIBREF2. Clinical observations in the speech of patients can be objectively and automatically measured by using computer aided methods supported in signal processing and pattern recognition with the aim to address two main aspects: (1) to support the diagnosis of the disease by classifying healthy control (HC) subjects and patients, and (2) to predict the level of degradation of the speech of the patients according to a specific clinical scale.
Most of the studies in the literature to classify PD from speech are based on computing hand-crafted features and using classifiers such as support vector machines (SVMs) or K-nearest neighbors (KNN). For instance, in BIBREF3, the authors computed features related to perturbations of the fundamental frequency and amplitude of the speech signal to classify utterances from 20 PD patients and 20 HC subjects, Turkish speakers. Classifiers based on KNN and SVMs were considered, and accuracies of up to 75% were reported. Later, in BIBREF4 the authors proposed a phonation analysis based on several time-frequency representations to assess tremor in the speech of PD patients. The extracted features were based on energy and entropy computed from time-frequency representations. Several classifiers were used, including Gaussian mixture models (GMMs) and SVMs. Accuracies of up to 77% were reported in utterances of the PC-GITA database BIBREF5, formed with utterances from 50 PD patients and 50 HC subjects, Colombian Spanish native speakers. The authors from BIBREF6 computed features to model different articulation deficits in PD such as vowel quality, coordination of laryngeal and supra-laryngeal activity, precision of consonant articulation, tongue movement, occlusion weakening, and speech timing. The authors studied the rapid repetition of the syllables /pa-ta-ka/ pronounced by 24 Czech native speakers, and reported an accuracy of 88% discriminating between PD patients and HC speakers, using an SVM classifier. Additional articulation features were proposed in BIBREF7, where the authors modeled the difficulty of PD patients to start/stop the vocal fold vibration in continuous speech. The model was based on the energy content in the transitions between unvoiced and voiced segments. The authors classified PD patients and HC speakers with speech recordings in three different languages (Spanish, German, and Czech), and reported accuracies ranging from 80% to 94% depending on the language; however, the results were optimistic, since the hyper-parameters of the classifier were optimized based on the accuracy on the test set. Another articulation model was proposed in BIBREF8. The authors considered a forced alignment strategy to segment the different phonetic units in the speech utterances. The phonemes were segmented and grouped to train different GMMs. The classification was performed based on a threshold of the difference between the posterior probabilities from the models created for HC subjects and PD patients. The model was tested with Colombian Spanish utterances from the PC-GITA database BIBREF5 and with the Czech data from BIBREF9. The authors reported accuracies of up to 81% for the Spanish data, and of up to 94% for the Czech data.
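The studies surveyed above share a common pipeline: extract hand-crafted acoustic features per speaker, then train a shallow classifier. A minimal sketch of that shared pipeline under our own assumptions, with synthetic stand-in features (the real features, e.g. F0 perturbations or transition energies, differ per study):

```python
# Generic hand-crafted-features-plus-SVM pipeline; feature values are
# synthetic placeholders, not real acoustic measurements.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))   # 100 speakers x 20 acoustic features (synthetic)
y = np.repeat([0, 1], 50)        # 0 = HC, 1 = PD (balanced, as in PC-GITA)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, X, y, cv=10).mean())  # 10-fold cross-validated accuracy
```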
In addition to the hand-crafted feature extraction models, there is a growing interest in the research community to consider deep learning models in the assessment of the speech of PD patients BIBREF10, BIBREF11, BIBREF12. Deep learning methods have the potential to extract more abstract and robust features than those manually computed. These features could help to improve the accuracy of different models to classify pathological speech, such as PD BIBREF13. A deep learning based articulation model was proposed in BIBREF11 to model the difficulties of the patients to stop/start the vibration of the vocal folds. Transitions between voiced and unvoiced segments were modeled with time-frequency representations and convolutional neural networks (CNNs). The authors considered speech recordings of PD patients and HC speakers in three languages: Spanish, German, and Czech, and reported accuracies ranging from 70% to 89%, depending on the language. However, in a language independent scenario, i.e., training the CNN with utterances from one language and testing with the remaining two, the results were not satisfactory (accuracy < 60%).
The classification of PD from speech in different languages has to be carefully conducted to avoid bias towards the linguistic content of each language. For instance, Czech and German are richer than Spanish in terms of consonant production, which may make it easier for Czech PD patients to produce consonant sounds than for Spanish PD patients. Despite these language-dependent issues, the results in the classification of PD in different languages could be improved using a transfer learning strategy among languages, i.e., training a base model with utterances from one language, and then fine-tuning the weights with utterances from the target language BIBREF14. Similar approaches based on transfer learning have recently been considered to classify PD using handwriting BIBREF15. In the present study, we propose a methodology to classify PD via a transfer learning strategy with the aim of improving the accuracy in different languages. CNNs trained with utterances from one language are used to initialize a model to classify speech utterances from PD patients in a different language. The models are evaluated with speech utterances in Spanish, German, and Czech. The results suggest that the transfer learning strategy improved the accuracy of the models by over 8% with respect to models trained only with utterances from the target language.
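A minimal sketch of the transfer strategy described above, using PyTorch; the CNN architecture and the choice to freeze the convolutional layers are illustrative assumptions, not the authors' exact model:

```python
# Cross-language transfer: train on a source language, copy weights,
# fine-tune on the target language. Architecture is a toy stand-in.
import torch
import torch.nn as nn

class SpeechCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(               # time-frequency feature extractor
            nn.Conv2d(1, 16, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(4))
        self.classifier = nn.Linear(16 * 4 * 4, 2)   # PD vs. HC

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# 1) Train the base model on the source language (e.g. Spanish); loop omitted.
base = SpeechCNN()

# 2) Initialize the target-language model from the base model's weights.
target = SpeechCNN()
target.load_state_dict(base.state_dict())

# 3) Fine-tune on the smaller target-language data; here we (optionally)
#    freeze the convolutional layers and update only the classifier head.
for p in target.features.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(target.classifier.parameters(), lr=1e-4)
```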
Clinical observations in the speech of patients can be objectively and automatically measured by using computer aided methods supported in signal processing and pattern recognition with the aim to address two main aspects: (1) to support the diagnosis of the disease by classifying healthy control (HC) subjects and patients, and (2) to predict the level of degradation of the speech of the patients according to a specific clinical scale.
|
What is the aim of using computer-aided methods supported in signal processing and pattern recognition?
|
To address two main aspects: (1) to support the diagnosis of the disease by classifying healthy control (HC) subjects and patients, and (2) to predict the level of degradation of the speech of the patients according to a specific clinical scale.
|
null | false
| null |
What are some types of physical exercise?
|
Exercise can encompass:
- Cardiovascular (aka cardio), to improve blood flow/circulation and overall heart and lung health
- Lifting, to build or tone muscles (arms, legs, and shoulders benefit from lifting)
- Core, to improve balance, coordination, and stability
- Stretching, to improve flexibility and tone (yoga, martial arts, barre, pilates)
- Endurance (swimming, cycling, running distances)
|
|
null | false
| null |
Write a poem about a little neurodiverse boy with amazing gifts.
|
I know a little boy called Shane
He has an amazing, unique brain
All the country flags of the world he knows, at a glance
Big numbers he adds and multiplies, whenever he gets a chance
His curiosity is what fascinates me
And oh, his fantastic memory!
He sometimes struggles to make friends
And seldom cares about the latest fashion trends
He is a shining star, shining bright
He brings joy to everyone around him, and pure delight!
|
|
null | false
| 60
|
Informal speech differs from formal speech, especially in Vietnamese, due to the many conjunctive words in this language. Building an ASR model to handle this kind of speech is particularly difficult due to the lack of training data and the cost of data collection. Two components of an ASR system contribute the most to its accuracy: the acoustic model and the language model. While collecting data for the acoustic model is time-consuming and costly, language model data is much easier to collect.
Language model training for an Automatic Speech Recognition (ASR) system is usually based on corpora crawled from formal text, so conjunctive words that are often used in conversation are missed, biasing the system toward written-style speech.
In this paper, we present our attempt to mitigate these problems using a large-scale data set and a language model combination technique that requires only a small amount of conversational data but can still handle conversational speech very well.
In this paper, we present our attempt to mitigate these problems using a large-scale data set and a language model combination technique that requires only a small amount of conversational data but can still handle conversational speech very well.
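The excerpt does not specify the combination technique; a common method that fits the description is linear interpolation of a large formal-text language model with a small conversational one. A sketch with hypothetical probability functions:

```python
# Linear LM interpolation (an assumed technique, not confirmed by the excerpt).
LAMBDA = 0.7  # interpolation weight, tuned on held-out conversational data

def p_formal(word, history):    # large LM trained on crawled formal text
    return 0.01                  # placeholder probability

def p_conv(word, history):      # small LM trained on conversational data
    return 0.05                  # placeholder probability

def p_combined(word, history):
    return LAMBDA * p_formal(word, history) + (1 - LAMBDA) * p_conv(word, history)

print(p_combined("nhé", []))    # conversational particles get extra probability mass
```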
|
What does the author use to solve the problems?
|
A large scale data set and a language model combination technique.
|
null | false
| null |
What wine varietal is the most popular produced in Oregon?
|
Oregon is well-known for its Pinot Noir. Many of the most awarded Pinot Noirs come from Willamette Valley.
|
|
null | false
| null |
What are the benefits of exercising and why one should workout everyday especially if you have a sedentary lifestyle that makes you sit all day long?
|
Based on various studies, it's evident that health risks are on the rise due to sedentary lifestyles. These have led to multiple health issues such as cardiovascular disease, diabetes and obesity. By exercising, one can stay in shape and active, especially if the workout is rigorous enough to make your heart pump faster. Most doctors recommend exercising at least half an hour every day to stay fit.
|
|
null | false
| null |
Individuals whom the media refers to as members of the PayPal Mafia include:
Peter Thiel, PayPal founder and former chief executive officer who is sometimes referred to as the "don" of the PayPal Mafia
Max Levchin, founder and chief technology officer at PayPal
Elon Musk, co-founder of Zip2, founder of X.com which merged with Confinity to form PayPal. Musk later founded SpaceX, co-founded Tesla, Inc., co-founded OpenAI, Neuralink, founded The Boring Company, and became owner of Twitter, Inc.
David O. Sacks, former PayPal COO who later founded Geni.com and Yammer
Scott Banister, early advisor and board member at PayPal.
Roelof Botha, former PayPal CFO who later became a partner and Senior Steward of venture capital firm Sequoia Capital
Steve Chen, former PayPal engineer who co-founded YouTube.
Reid Hoffman, former executive vice president who later founded LinkedIn and was an early investor in Facebook and Aviary
Ken Howery, former PayPal CFO who became a partner at Founders Fund
Chad Hurley, former PayPal web designer who co-founded YouTube
Eric M. Jackson, who wrote the book The PayPal Wars and became chief executive officer of WND Books and co-founded CapLinked
Jawed Karim, former PayPal engineer who co-founded YouTube
Jared Kopf, former PayPal (executive assistant to Peter Thiel) who co-founded Slide, HomeRun and NextRoll
Dave McClure, former PayPal marketing director, a super angel investor for start up companies
Andrew McCormack, co-founder of Valar Ventures
Luke Nosek, PayPal co-founder and former vice president of marketing and strategy, became a partner at Founders Fund with Peter Thiel and Ken Howery
Keith Rabois, former executive at PayPal who later worked at LinkedIn, Slide, Square, Khosla Ventures, and currently with Peter Thiel at Founders Fund, and personally invested in Tokbox, Xoom, Slide, LinkedIn, Geni, Room 9 Entertainment, YouTube, and Yelp
Jack Selby, former vice president of corporate and international development at PayPal who co-founded Clarium Capital with Peter Thiel, later becoming managing director of Grandmaster Capital Management
Premal Shah, former product manager at PayPal, became the founding president of Kiva.org
Russel Simmons, former PayPal engineer who co-founded Yelp Inc.
Jeremy Stoppelman, former vice president of technology at PayPal who later co-founded Yelp
Yishan Wong, former engineering manager at PayPal, later worked at Facebook and became the CEO of Reddit
|
What was the relationship of the person who became MD at Grandmaster Capital Management to Paypal?
|
Jack Selby was formerly the vice president of corporate and international development at PayPal.
|
2001.05467
| false
| null |
FLOAT SELECTED: Table 1: Automatic Evaluation Activity/Entity F1 results for baselines and our 3 models (attn means “with attention”). LSTM, HRED and VHRED are reported in Serban et al. (2017a), VHRED (attn) and Reranking-RL in Niu and Bansal (2018a), and the rest are produced by our work. All our four models have statistically significantly higher F1 values (p < 0.001) against VHRED (attn) and MMI.
FLOAT SELECTED: Table 1: Automatic Evaluation Activity/Entity F1 results for baselines and our 3 models (attn means “with attention”). LSTM, HRED and VHRED are reported in Serban et al. (2017a), VHRED (attn) and Reranking-RL in Niu and Bansal (2018a), and the rest are produced by our work. All our four models have statistically significantly higher F1 values (p < 0.001) against VHRED (attn) and MMI.
|
To what other competitive baselines is this approach compared?
|
LSTMs with and without attention, HRED, VHRED with and without attention, MMI and Reranking-RL
|
null | false
| null |
Do aliens exist?
|
Although there is no definitive evidence for the existence of extraterrestrial life, pictures and videos of UFOs, though often of low quality, have convinced many people that not only does extraterrestrial life exist, but that it visits Earth regularly.
|
|
null | false
| 143
|
The explosion of available scientific articles in the Biomedical domain has led to the rise of Biomedical Information Extraction (BioIE). BioIE systems aim to extract information from a wide spectrum of articles, including medical literature, biological literature, electronic health records, etc., that can be used by clinicians and researchers in the field. Often the outputs of BioIE systems are used to assist in the creation of databases, or to suggest new paths for research. For example, a ranked list of interacting proteins that are extracted from biomedical literature, but are not present in existing databases, can allow researchers to make informed decisions about which protein/gene to study further. Knowledge of interactions between drugs is necessary for clinicians who simultaneously administer multiple drugs to their patients. A database of diseases, treatments and tests is beneficial for doctors consulting on complicated medical cases.
The main problems in BioIE are similar to those in Information Extraction: 1. Named Entity Recognition, 2. Relation Extraction, and 3. Event Extraction.
This paper discusses, in each section, various methods that have been adopted to solve the listed problems. Each section also highlights the difficulty of Information Extraction tasks in the biomedical domain.
This paper is intended as a primer to Biomedical Information Extraction for current NLP researchers. It aims to highlight the diversity of the various techniques from Information Extraction that have been applied in the Biomedical domain. The state of biomedical text mining is reviewed regularly. For more extensive surveys, consult BIBREF0, BIBREF1, BIBREF2.
The main problems in BioIE are similar to those in Information Extraction: 1. Named Entity Recognition 2. Relation Extraction 3. Event Extraction
|
Is there a similarity between BioIE and Information Extraction?
|
Yes, there is.
|
null | false
| 8
|
We acknowledge the support of NVIDIA Corporation for their donation of one GeForce GTX TITAN X GPU used for this research.
We acknowledge the support of NVIDIA Corporation for their donation of one GeForce GTX TITAN X GPU used for this research.
|
What did NVIDIA Corporation donate for the research?
|
One GeForce GTX TITAN X GPU.
|
null | false
| null |
Taylor Alison Swift (born December 13, 1989) is an American singer-songwriter. Her genre-spanning discography, songwriting and artistic reinventions have received critical praise and wide media coverage. Born in West Reading, Pennsylvania, Swift moved to Nashville at age 14 to become a country artist. She signed a songwriting deal with Sony/ATV Music Publishing in 2004 and a recording contract with Big Machine Records in 2005. Her 2006 self-titled debut album made her the first female country artist to write a U.S. platinum-certified album.
Swift's next albums, Fearless (2008) and Speak Now (2010), explored country pop. The former's "Love Story" and "You Belong with Me" were the first country songs to top the U.S. pop and all-genre airplay charts, respectively. She experimented with rock and electronic styles on Red (2012), which featured her first Billboard Hot 100 number-one song, "We Are Never Ever Getting Back Together", and eschewed her country image in her synth-pop album, 1989 (2014), supported by chart-topping songs "Shake It Off", "Blank Space", and "Bad Blood". Media scrutiny inspired the urban-flavored Reputation (2017) and its number-one single "Look What You Made Me Do".
|
Based on the reference text, which album helped Taylor Swift shift from a country star to more of a pop star?
|
Taylor Swift's 1989 album was her first pop album, helping shift her image from country star to pop star.
|
null | false
| 83
|
We developed the annotation guidelines jointly with an experienced annotator, who is a native Arabic speaker with a good knowledge of various Arabic dialects. We made sure that our guidelines were compatible with those of OffensEval2019. The annotator carried out all annotation. Tweets were given one or more of the following four labels: offensive, vulgar, hate speech, or clean. Since the offensive label covers both vulgar and hate speech and vulgarity and hate speech are not mutually exclusive, a tweet can be just offensive or offensive and vulgar and/or hate speech. The annotation adhered to the following guidelines:
Offensive tweets contain explicit or implicit insults or attacks against other people, or inappropriate language, such as:
Direct threats or incitement, ex: احرقوا مقرات المعارضة (“AHrqwA mqrAt AlmEArDp” – “burn the headquarters of the opposition”) and هذا المنافق يجب قتله (“h*A AlmnAfq yjb qtlh” – “this hypocrite needs to be killed”).
Insults and expressions of contempt, which include: Animal analogy, ex: يا كلب (“yA klb” – “O dog”) and كل تبن (“kl tbn” – “eat hay”); Insult to family members, ex: يا روح أمك (“yA rwH Amk” – “O mother's soul”); Sexually-related insults, ex: يا ديوث (“yA dywv” – “O person without envy”); Damnation, ex: الله يلعنك (“Allh ylEnk” – “may Allah/God curse you”); and Attacks on morals and ethics, ex: يا كاذب (“yA kA*b” – “O liar”).
Vulgar tweets are a subset of offensive tweets and contain profanity, such as mentions of private parts or sexual-related acts or references.
Hate speech tweets are a subset of offensive tweets containing offensive language that targets a group based on common characteristics such as: Race, ex: يا زنجي (“yA znjy” – “O negro”); Ethnicity, ex: الفرس الأنجاس (“Alfrs AlAnjAs” – “Impure Persians”); Group or party, ex: أبوك شيوعي (“Abwk $ywEy” – “your father is communist”); and Religion, ex: دينك القذر (“dynk Alq*r” – “your filthy religion”).
Clean tweets do not contain vulgar or offensive language. We noticed that some tweets contain offensive words, yet the tweet as a whole should not be considered offensive given the user's intent. This suggests that naive string matching that ignores context will fail in some cases. Examples of such ambiguous cases include: Humor, ex: يا عدوة الفرحة ههه (“yA Edwp AlfrHp hhh” – “O enemy of happiness hahaha”); Advice, ex: لا تقل لصاحبك يا خنزير (“lA tql lSAHbk yA xnzyr” – “don't say to your friend: You are a pig”); Condition, ex: إذا عارضتهم يقولون يا عميل (“A*A EArDthm yqwlwn yA Emyl” – “if you disagree with them they will say: You are an agent”); Condemnation, ex: لماذا نسب بقول: يا بقرة؟ (“lmA*A nsb bqwl: yA bqrp?” – “Why do we insult others by saying: O cow?”); Self offense, ex: تعبت من لساني القذر (“tEbt mn lsAny Alq*r” – “I am tired of my dirty tongue”); Non-human target, ex: يا بنت المجنونة يا كورة (“yA bnt Almjnwnp yA kwrp” – “O daughter of the crazy one O football”); and Quotation from a movie or a story, ex: تاني يا زكي! تاني يا فاشل (“tAny yA zky! tAny yA fA$l” – “again O Zaky! again O loser”). For other ambiguous cases, the annotator searched Twitter to find how actual users used expressions.
Table TABREF11 shows the distribution of the annotated tweets. There are 1,915 offensive tweets, including 225 vulgar tweets and 506 hate speech tweets, and 8,085 clean tweets. To validate the quality of annotation, a random sample of 100 tweets from the data, containing 50 offensive and 50 clean tweets, was given to three additional annotators. We calculated the Inter-Annotator Agreement between the annotators using Fleiss’s Kappa coefficient BIBREF23. The Kappa score was 0.92, indicating high quality annotation and agreement.
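A sketch of the agreement computation just described, assuming labels were collected as one row per tweet and one column per annotator; the annotation matrix below is a random placeholder (the real labels gave 0.92), and the statsmodels implementation of Fleiss' kappa is used:

```python
# Fleiss' kappa over a tweets-x-annotators label matrix.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# 100 tweets x 4 annotators, binary labels (0 = clean, 1 = offensive).
# Random placeholder annotations; the paper's real labels yielded 0.92.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=(100, 4))

counts, _ = aggregate_raters(labels)   # per-tweet counts for each category
print(fleiss_kappa(counts))
```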
OFFENSIVE (OFF): Offensive tweets contain explicit or implicit insults or attacks against other people, or inappropriate language, such as: Direct threats or incitement, ex: احرقوا مقرات المعارضة (“AHrqwA mqrAt AlmEArDp” – “burn opposition headquarters”) and هذا المنافق يجب قتله (“h*A AlmnAfq yjb qtlh” – “kill this hypocrite”).
|
How to define OFFENSIVE (OFF) in the data?
|
OFFENSIVE (OFF): Offensive tweets contain explicit or implicit insults or attacks against other people, or inappropriate language, such as: Direct threats or incitement.
|
null | false
| null |
Tell me whether these are solid or gas at room temperature: oxygen, nitrogen, carbon dioxide.
|
Oxygen, nitrogen and carbon dioxide are gases at room temperature.
|
|
null | false
| null |
Atlassian Corporation (/ətˈlæsiən/) is an Australian software company that develops products for software developers, project managers and other software development teams. The company is domiciled in Delaware, with global headquarters in Sydney, Australia, and US headquarters in San Francisco.
|
Where was Atlassian founded?
|
Atlassian Corporation was founded in Australia, but is domiciled in Delaware, with global headquarters in Sydney, Australia, and US headquarters in San Francisco.
|
1909.01515
| false
| null |
We use two datasets, NELL-One and Wiki-One, which are constructed by BIBREF11. NELL-One and Wiki-One are derived from NELL BIBREF2 and Wikidata BIBREF0, respectively. Furthermore, because these two benchmarks were first tested on GMatching, which considers both learned embeddings and one-hop graph structures, a background graph is constructed with relations outside the training/validation/test sets for obtaining the pre-trained entity embeddings and providing the local graph for GMatching.
We use two datasets, NELL-One and Wiki-One, which are constructed by BIBREF11.
|
What datasets are used to evaluate the approach?
|
NELL-One, Wiki-One
|
null | false
| null |
Which is a species of fish? Soldierfish or Soldier Boy
|
Soldierfish
|
|
null | false
| null |
Mary Campbell "Mollie" Dawbarn (5 January 1902 – 24 May 1982) was an Australian biochemist and nutritional physiologist. She is particularly well known for her research on B complex vitamins. Producing an assay for vitamin B12 and perfecting the methods for estimating vitamin B1 in bread are among her most notable works.
|
Extract the most famous research contributions of Mary Campbell Dawbarn, and put them in a list.
|
- Producing an assay for vitamin B12
- Perfecting the methods for estimating vitamin B1 in bread
|
null | false
| 69
|
Due to their recent success, neural network models have been used with competitive results on abstractive summarization. A neural attention model was first applied to the task, easily achieving state-of-the-art performance on multiple datasets BIBREF5. The model has been extended to instead use a recurrent neural network as the decoder BIBREF3. The model was further extended to use a full RNN encoder-decoder framework and further enhancements through lexical and statistical features BIBREF15. The current state-of-the-art performance is achieved by selectively encoding words as a process of distilling salient information BIBREF13.
Neural abstractive summarization models have also been explored to summarize longer documents. Word extraction models have been previously explored, performing worse than sentence extraction models BIBREF19. Hierarchical attention-based recurrent neural networks have also been applied to the task, owing to the idea that there are multiple sentences in a document BIBREF15. Finally, distraction-based models were proposed to enable models to traverse the text content and grasp the overall meaning BIBREF4. The current state-of-the-art performance is achieved by a graph-based attentional neural model, considering the key factors of document summarization such as saliency, fluency and novelty BIBREF14.
Previous studies on summarization tasks have only used entities in the preprocessing stage, to anonymize the dataset BIBREF15 and to mitigate out-of-vocabulary problems BIBREF14. Linked entities for summarization are still not properly explored, and we are the first to use linked entities to improve the performance of the summarizer.
Linked entities for summarization are still not properly explored, and we are the first to use linked entities to improve the performance of the summarizer.
|
Are the authors the first to use linked entities to improve the performance of the summarizer?
|
Yes.
|
null | false
| null |
Give me some ideas of what to cook for dinner, I am vegetarian and I have picky children
|
As a parent, I have some ideas for quick weeknight meals that should appeal to the whole family.
Fried rice can be an easy one-pan dinner incorporating vegetables, protein, healthy fats, and carbohydrates, making it a balanced option for families. Try frying some onion and garlic in some coconut oil. While that cooks, chop up the children's most loved vegetables to throw in - I would recommend carrots, peppers, and sweet peas. Give that a few minutes to cook down, then add in your cooked rice with some low sodium soy sauce and candied ginger syrup. Before serving, crack an egg into the pan and mix it with the rice until cooked.
Another great family friendly option that can be modified for vegetarians is shepherd's pie. Try substituting the beef/lamb with either cooked lentils or Impossible Beef. The cheesy potatoes and savory filling make for a family favorite! This can be made ahead of time and popped in the oven 45 minutes before serving for an easy weeknight meal.
|
|
null | false
| 166
|
Named Entity Linking is the task of detecting mentions of entities from a knowledge base in free text, as illustrated in Figure 1.
Most of the entity linking literature focuses on target knowledge bases which are derived from Wikipedia, such as DBpedia BIBREF0 or YAGO BIBREF1. These bases are curated automatically by harvesting information from the info-boxes and categories on each Wikipedia page and are therefore not editable directly.
Wikidata BIBREF2 is an editable, multilingual knowledge base which has recently gained popularity as a target database for entity linking BIBREF3, BIBREF4, BIBREF5, BIBREF6. As these new approaches to entity linking also introduce novel learning methods, it is hard to tell apart the benefits that come from the new models and those which come from the choice of knowledge graph and the quality of its data.
We review the main differences between Wikidata and static knowledge bases extracted from Wikipedia, and analyze their implications for entity linking. We illustrate these differences by building a simple entity linker, OpenTapioca, which only uses data from Wikidata, and show that it is competitive with other systems with access to larger data sources for some tasks. OpenTapioca can be trained easily from a Wikidata dump only, and can be efficiently kept up to date in real time as Wikidata evolves. We also propose tools to adapt existing entity linking datasets to Wikidata, and offer a new entity linking dataset consisting of affiliation strings extracted from research articles.
We review the main differences between Wikidata and static knowledge bases extracted from Wikipedia, and analyze their implications for entity linking. We illustrate these differences by building a simple entity linker, OpenTapioca, which only uses data from Wikidata, and show that it is competitive with other systems with access to larger data sources for some tasks.
|
What do they build to illustrate the main differences between Wikidata and static knowledge bases extracted from Wikipedia?
|
A simple entity linker, OpenTapioca.
|
null | false
| null |
how does the water cycle work?
|
The water cycle has no true starting point, but the oceans are a good place to begin. The sun warms the water in the oceans, and some of it evaporates into the air. Rising air currents carry the vapour into the atmosphere, where lower temperatures cause the vapour to condense into clouds.
|
|
null | false
| null |
are GSIs important to being a global company
|
yes
|
|
2001.06888
| false
| null |
In BIBREF8 a refined collection of tweets gathered from Twitter is presented. Their dataset, which is labeled for the named entity recognition task, contains 8,257 tweets. There are 12,784 entities in total in this dataset. Table TABREF19 shows statistics related to each named entity in the training, development, and test sets.
In BIBREF8 a refined collection of tweets gathered from Twitter is presented. Their dataset, which is labeled for the named entity recognition task, contains 8,257 tweets. There are 12,784 entities in total in this dataset. Table TABREF19 shows statistics related to each named entity in the training, development, and test sets.
|
What datasets did they use?
|
The answers are shown as follows:
* BIBREF8 a refined collection of tweets gathered from Twitter
|
null | false
| null |
What is the best boba order to place from a local boba store for a company that has around 300 employees in the office?
|
In order to serve up to 300 people, it's best to order a couple of different tea flavors to fit everyone's taste. People may also have different sweetness and ice preferences, but it can be hard to account for so many preferences, so a good default would be to set every drink to half sweetness and half ice.
A possible order could be: 20 cups of black milk tea, 20 cups of green milk tea, 20 cups of oolong milk tea, 20 cups of mango green tea, 20 cups of passionfruit green tea, 20 cups of matcha latte, 20 cups of taro milk tea, and 20 cups of Thai milk tea. It's best to ask for the boba on the side so that it stays fresh during delivery. Since this is a local store, it would be also wise to place the order ahead of time so that they aren't overwhelmed with the order size.
This order accounts for many different tastes and reduces the likelihood of over-ordering, as it's possible that not all employees will want boba.
|
|
null | false
| 291
|
Coreference resolution is the task of identifying all mentions which refer to the same entity in a document. It has been shown beneficial in many natural language processing (NLP) applications, including question answering BIBREF0 and information extraction BIBREF1, and is often regarded as a prerequisite to any text understanding task.
Coreference resolution can be regarded as a clustering problem: each cluster corresponds to a single entity and consists of all its mentions in a given text. Consequently, it is natural to evaluate predicted clusters by comparing them with the ones annotated by human experts, and this is exactly what the standard metrics (e.g., MUC, B³, CEAF) do. In contrast, most state-of-the-art systems are optimized to make individual coreference decisions, and such losses are only indirectly related to the metrics.
One way to deal with this challenge is to directly optimize the non-differentiable metrics using reinforcement learning (RL), for example, relying on the REINFORCE policy gradient algorithm BIBREF2. However, this approach has not been very successful, which, as suggested by Clark and Manning (2016), is possibly due to the discrepancy between sampling decisions at training time and choosing the highest ranking ones at test time. A more successful alternative is using a `roll-out' stage to associate cost with possible decisions, as in Clark and Manning (2016), but it is computationally expensive. Imitation learning BIBREF3, BIBREF4, though also exploiting metrics, requires access to an expert policy, with exact policies not directly computable for the metrics of interest.
In this work, we aim at combining the best of both worlds by proposing a simple method that can turn popular coreference evaluation metrics into differentiable functions of model parameters. As we show, this function can be computed recursively using scores of individual local decisions, resulting in a simple and efficient estimation procedure. The key idea is to replace non-differentiable indicator functions (e.g. the member function $[\![m \in e]\!]$) with the corresponding posterior probabilities ($p(m \in e)$) computed by the model. Consequently, non-differentiable functions used within the metrics (e.g. the set size function $|e| = \sum_m [\![m \in e]\!]$) become differentiable ($\sum_m p(m \in e)$). Though we assume that the scores of the underlying statistical model can be used to define a probability model, we show that this is not a serious limitation. Specifically, as a baseline we use a probabilistic version of the neural mention-ranking model of P15-1137, which on its own outperforms the original one and achieves similar performance to its global version BIBREF5. Importantly, when we use the introduced differentiable relaxations in training, we observe a substantial gain in performance over our probabilistic baseline. Interestingly, the absolute improvement (+0.52) is higher than the one reported in Clark and Manning (2016) using RL (+0.05) and the one using reward rescaling (+0.37). This suggests that our method provides a viable alternative to using RL and reward rescaling.
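A toy illustration (ours, not the paper's code) of why this relaxation helps, in PyTorch: the indicator-based set size blocks gradients, while the posterior-based size is differentiable:

```python
# Replacing a hard membership indicator with a posterior probability makes
# set-size-like quantities differentiable, so gradients can flow through them.
import torch

logits = torch.tensor([2.0, -1.0, 0.5], requires_grad=True)  # one score per mention
p_member = torch.sigmoid(logits)              # p(m in e): soft membership

hard_size = (p_member > 0.5).float().sum()    # indicator-based size: no gradient path
soft_size = p_member.sum()                    # relaxed size: differentiable

soft_size.backward()
print(hard_size.item(), soft_size.item(), logits.grad)  # grad exists only via soft_size
```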
The outline of our paper is as follows: we introduce our neural resolver baseline and the B³ and LEA metrics in Section SECREF2. Our method to turn a mention-ranking resolver into an entity-centric resolver is presented in Section SECREF3, and the proposed differentiable relaxations in Section SECREF4. Section SECREF5 shows our experimental results.
Specifically, as a baseline we use a probabilistic version of the neural mention-ranking model of P15-1137, which on its own outperforms the original one and achieves similar performance to its global version.
|
What model do they use in their method?
|
A probabilistic version of the neural mention-ranking model.
|
1902.06843
| false
| null |
For capturing facial presence, we rely on BIBREF56's approach, which uses a multilevel convolutional coarse-to-fine network cascade to tackle facial landmark localization. We identify facial presentation, emotion from facial expression, and demographic features from profile/posted images. Table TABREF21 illustrates facial presentation differences in both profile and posted images (media) for depressed and control users in INLINEFORM0, with the control class showing significantly higher facial presence in both profile and media images (8% and 9%, respectively) than the depressed class. In contrast with age and gender disclosure, vulnerable users are less likely to disclose their facial identity, possibly due to lack of confidence or fear of stigma.
Facial Expression:
Following BIBREF8's approach, we adopt Ekman's model of six emotions: anger, disgust, fear, joy, sadness and surprise, and use the Face++ API to automatically capture them from the shared images. Positive emotions are joy and surprise, and negative emotions are anger, disgust, fear, and sadness. In general, for each user u in INLINEFORM0, we process profile/shared images for both the depressed and the control groups with at least one face from the shared images (Table TABREF23). For the photos that contain multiple faces, we measure the average emotion.
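A minimal sketch of the multi-face averaging just described; the per-face emotion dictionaries mirror the shape of typical Face++ output, but the exact field names here are assumptions:

```python
# Average Ekman's six emotions over all faces detected in one image.
faces = [  # hypothetical per-face API output for a single shared image
    {"anger": 0.1, "disgust": 0.0, "fear": 0.0, "joy": 0.7, "sadness": 0.1, "surprise": 0.1},
    {"anger": 0.3, "disgust": 0.1, "fear": 0.1, "joy": 0.2, "sadness": 0.2, "surprise": 0.1},
]

def average_emotions(faces):
    """Average each emotion score over all detected faces."""
    keys = faces[0].keys()
    return {k: sum(f[k] for f in faces) / len(faces) for k in keys}

print(average_emotions(faces))  # e.g. {'anger': 0.2, ..., 'joy': 0.45, ...}
```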
General Image Features:
The importance of interpretable computational aesthetic features for studying users' online behavior has been highlighted by several efforts BIBREF55, BIBREF8, BIBREF57. Color, as a pillar of the human vision system, has a strong association with conceptual ideas like emotion BIBREF58, BIBREF59. We measured the normalized red, green, blue and the mean of original colors, and brightness and contrast relative to variations of luminance. We represent images in Hue-Saturation-Value color space, which seems intuitive for humans, and measure mean and variance for saturation and hue. Saturation is defined as the difference in the intensities of the different light wavelengths that compose the color. Although hue is not interpretable, high saturation indicates vividness and chromatic purity, which are more appealing to the human eye BIBREF8. Colorfulness is measured as a difference against a gray background BIBREF60. Naturalness is a measure of the degree of correspondence between images and the human perception of reality BIBREF60. In color reproduction, naturalness is measured from the mental recollection of the colors of familiar objects. Additionally, there is a tendency among vulnerable users to share sentimental quotes bearing negative emotions. We performed optical character recognition (OCR) with python-tesseract to extract text and their sentiment score. As illustrated in Table TABREF26, vulnerable users tend to use less colorful (higher grayscale) profile as well as shared images to convey their negative feelings, and share images that are less natural (Figure FIGREF15). With respect to the aesthetic quality of images (saturation, brightness, and hue), depressed users use images that are less appealing to the human eye. We employ an independent t-test, adopting Bonferroni correction as a conservative approach to adjust the confidence intervals. Overall, we have 223 features, and choose a Bonferroni-corrected INLINEFORM0 level of INLINEFORM1 (*** INLINEFORM2, ** INLINEFORM3).
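A minimal sketch of a few of the color features described above (RGB means, brightness, contrast, HSV statistics); the library choices are ours, as the paper does not name its implementation:

```python
# Extract simple color/aesthetic features from an image file.
import numpy as np
from PIL import Image

def color_features(path):
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float) / 255.0
    hsv = np.asarray(Image.open(path).convert("HSV"), dtype=float) / 255.0
    return {
        "mean_r": rgb[..., 0].mean(),          # normalized red
        "mean_g": rgb[..., 1].mean(),          # normalized green
        "mean_b": rgb[..., 2].mean(),          # normalized blue
        "brightness": rgb.mean(),              # proxy for mean luminance
        "contrast": rgb.std(),                 # variation of luminance
        "hue_mean": hsv[..., 0].mean(),
        "saturation_mean": hsv[..., 1].mean(),
        "saturation_var": hsv[..., 1].var(),
    }
```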
Qualitative Language Analysis: The recent LIWC version summarizes textual content in terms of language variables such as analytical thinking, clout, authenticity, and emotional tone. It also measures other linguistic dimensions such as descriptors categories (e.g., percent of target words gleaned by dictionary, or longer than six letters - Sixltr) and informal language markers (e.g., swear words, netspeak), and other linguistic aspects (e.g., 1st person singular pronouns.)
For capturing facial presence, we rely on BIBREF56's approach, which uses a multilevel convolutional coarse-to-fine network cascade to tackle facial landmark localization.
Facial Expression:
Following BIBREF8's approach, we adopt Ekman's model of six emotions: anger, disgust, fear, joy, sadness and surprise, and use the Face++ API to automatically capture them from the shared images.
General Image Features:
The importance of interpretable computational aesthetic features for studying users' online behavior has been highlighted by several efforts BIBREF55 , BIBREF8 , BIBREF57 .
Qualitative Language Analysis: The recent LIWC version summarizes textual content in terms of language variables such as analytical thinking, clout, authenticity, and emotional tone.
It also measures other linguistic dimensions such as descriptors categories (e.g., percent of target words gleaned by dictionary, or longer than six letters - Sixltr) and informal language markers (e.g., swear words, netspeak), and other linguistic aspects (e.g., 1st person singular pronouns.)
|
What types of features are used from each data type?
|
The answers are shown as follows:
* facial presence
* Facial Expression
* General Image Features
* textual content
* analytical thinking
* clout
* authenticity
* emotional tone
* Sixltr
* informal language markers
* 1st person singular pronouns
|
null | false
| null |
What skis should I buy?
|
There are many aspects to consider when buying a new pair of skis. The first, and probably most important, is your ability level. Ski companies typically group skiers into one of three categories: Level 1 (Beginner), Level 2 (Intermediate) or Level 3 (Advanced). Considering your ability level will help you decide on other factors when purchasing a ski, such as the length, underfoot width, dimensions, camber and rocker.
Another important consideration to make when purchasing skis is to think about the type of terrain that you like to ski. Do you prefer skiing all-mountain conditions, powder conditions or backcountry conditions?
Based on your answers to the previous two questions, you should be able to narrow the universe of skis that you'd consider for purchase. Next, you should use your height to determine the length of the ski that would fit you well.
After you've made a decision on what type of skis you'd like to buy, you should consider what type of bindings and boots you would like to use with your skis.
|
|
null | false
| null |
The Moluccan hanging parrot (Loriculus amabilis) is a species of parrot in the family Psittaculidae. It is endemic to forest and nearby habitats on Halmahera, Bacan and Morotai in Indonesia. It has sometimes included the Sula hanging parrot as a subspecies, but the two are increasingly treated as separate species based on their distinct differences in plumage and size (11 cm for the Moluccan hanging parrot versus 14 cm for the Sula hanging parrot).
|
What is a Moluccan hanging parrot?
|
The Moluccan hanging parrot is a species of parrot in the family Psittaculidae. It is endemic to forest and nearby habitats on Halmahera, Bacan and Morotai in Indonesia.
|
null | false
| null |
Give me a list of 5 first date ideas
|
- Grab coffee
- Dine out for a meal
- Do something active, like a hike
- Go to the museum
- Have a picnic
|
|
1608.02195
| false
| null |
Bag-of-words feature vectors were used to train a multinomial logistic regression model. Let INLINEFORM0 be the true label, where INLINEFORM1 is the total number of labels and INLINEFORM2 is the concatenation of the weight vectors INLINEFORM3 associated with the INLINEFORM4 th party then DISPLAYFORM0
Bag-of-words feature vectors were used to train a multinomial logistic regression model. Let INLINEFORM0 be the true label, where INLINEFORM1 is the total number of labels and INLINEFORM2 is the concatenation of the weight vectors INLINEFORM3 associated with the INLINEFORM4 th party then DISPLAYFORM0
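The DISPLAYFORM0 placeholder itself cannot be recovered from the extraction, but given the stated definitions (a true label among K labels and per-party weight vectors), it most plausibly denotes the standard multinomial logistic (softmax) likelihood; a hedged reconstruction:

```latex
% Plausible reconstruction (assumption): softmax likelihood over K labels,
% with x the bag-of-words feature vector and w_k the weights of party k.
P(y = k \mid x) = \frac{\exp(w_k^{\top} x)}{\sum_{j=1}^{K} \exp(w_j^{\top} x)}
```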
|
What model are the text features used in to provide predictions?
|
The answers are shown as follows:
* multinomial logistic regression
|
null | false
| null |
When is the best time to transplant bearded iris?
|
Bearded iris are one of the most popular perennial flowers in the American landscape. They have attractive spiky foliage that is evergreen in many Zones and provide beautiful blooms multiple times a year. They are cold hardy and are easily propagated and transplanted.
The best time of the year to transplant bearded iris is late summer. This provides the plant with sufficient time to establish a new root structure before the winter. Bearded iris grow from rhizomes, which quickly grow roots that provide the plant with the nutrients required to grow and form flowers. The rhizomes should be removed from the soil using a garden fork making an attempt to keep as much root structure as possible. The spiky leaves of the plant can be trimmed at this time to reduce the risk of the plant being disturbed by strong wind gusts when it is replanted. Plant the rhizome in its new location so that only about a half of the rhizome is buried in the soil. This enables the rhizome to receive direct sunlight and more vigorously replenish its stored starches and proteins.
|
|
null | false
| null |
Classify these foods as either "italian", "french", "mexican" or "other". Crepes Suzette, Parmigiana di melanzane, La paella de marisco, Tahchin, Tacos al pastor, Soupe à l'oignon, Enchiladas, Pappardelle al ragù.
|
italian: Parmigiana di melanzane, Pappardelle al ragù
french: Crepes Suzette, Soupe à l'oignon
mexican: Tacos al pastor, Enchiladas
other: La paella de marisco, Tahchin
|
|
null | false
| 502
|
In this work, we address an open challenge in Offline RL. In the resource-constrained setting that is motivated by real world applications, the features available during training offline may be different than the limited features available online during deployment. We highlighted a performance gap between offline RL agents trained using only the online features and agents trained using all the offline features. To bridge this gap, we proposed a student-teacher-based policy transfer learning approach. The proposed algorithm improves over the baseline significantly even when the dataset quality is lacking in the resource-constrained setting. The simplicity of the approach (with just one additional hyperparameter) makes it easy to extend it to other offline RL algorithms. It would be interesting to study other transfer learning approaches (e.g., policy transfer with other divergence regularizations for stochastic policies) in the future. Moreover, we observe that the proposed approach benefits especially when the dataset quality is low which is often the case with real world datasets. Despite this, the performance gap with the teacher is still high (in the low quality data regime) and this suggests more tailored approaches are required.
In this work, we address an open challenge in Offline RL. In the resource-constrained setting that is motivated by real world applications, the features available during training offline may be different than the limited features available online during deployment.
|
While the proposed setting makes sense in imitating real world settings, would it not be useful to have high quality trajectories in order to best leverage the action choices that come from these high quality offline features. Why not stick with D4RL?
|
As the reviewer rightly points out, the setting of RC-D4RL imitates real-world settings, and is therefore more practical for evaluating the resource-constrained setting algorithms.
|
null | false
| null |
The 2022 Indian Premier League, also known as IPL 15 or for sponsorship reasons, TATA IPL 2022, was the fifteenth season of the Indian Premier League (IPL), a professional Twenty20 cricket league established by the Board of Control for Cricket in India (BCCI) in 2007. The tournament was played from 26 March 2022 to 29 May 2022. The group stage of the tournament was played entirely in the state of Maharashtra, with Mumbai, Navi Mumbai and Pune hosting matches.
The season saw the expansion of the league with the addition of two new franchises. Chennai Super Kings were the defending champions, having won their fourth title during the previous season.
In the final, one of the new franchises, Gujarat Titans, beat Rajasthan Royals by seven wickets to win their first title.
|
Extract one of the franchises introduced in IPL 2022 from the text, and state who won the title during the previous season.
|
The IPL 2022 season saw the expansion of the league with the addition of two new franchises, one of which was Gujarat Titans. Gujarat Titans beat Rajasthan Royals by seven wickets to win their first title.
Also, Chennai Super Kings were the defending champions, having won their fourth title during the previous season.
|
null | false
| null |
What is Ralph Baer famous for?
|
Invented video games
|
|
null | false
| null |
Merlot (/ˈmɜːrloʊ/ MUR-loh) is a dark blue–colored wine grape variety, that is used as both a blending grape and for varietal wines. The name Merlot is thought to be a diminutive of merle, the French name for the blackbird, probably a reference to the color of the grape. Its softness and "fleshiness," combined with its earlier ripening, make Merlot a popular grape for blending with the sterner, later-ripening Cabernet Sauvignon, which tends to be higher in tannin.
|
What is Merlot?
|
Merlot (/ˈmɜːrloʊ/ MUR-loh) is a dark blue–colored wine grape variety, that is used as both a blending grape and for varietal wines.
|
null | false
| 109
|
CNN can automatically extract key features from the training data. It grasps contextual local features from a sentence and, after several convolution operations, it forms a global feature vector out of those local features. CNN does not need the hand-crafted features used in traditional supervised classifiers. Such hand-crafted features are difficult to compute and a good guess for encoding the features is always necessary in order to get satisfactory results. CNN, instead, uses a hierarchy of local features which are important to learn context. The hand-crafted features often ignore such a hierarchy of local features.
Features extracted by CNN can therefore be used instead of hand-crafted features, as they carry more useful information. The idea behind convolution is to take the dot product of a vector of INLINEFORM0 weights INLINEFORM1 also known as kernel vector with each INLINEFORM2 -gram in the sentence INLINEFORM3 to obtain another sequence of features INLINEFORM4 . DISPLAYFORM0
Thus, a max pooling operation is applied over the feature map and the maximum value INLINEFORM0 is taken as the feature corresponding to this particular kernel vector. Similarly, varying kernel vectors and window sizes are used to obtain multiple features BIBREF32 . For each word INLINEFORM1 in the vocabulary, a INLINEFORM2 -dimensional vector representation is given in a look up table that is learned from the data BIBREF33 . The vector representation of a sentence, hence, is a concatenation of vectors for individual words. Similarly, we can have look up tables for other features. One might want to provide features other than words if these features are suspected to be helpful. The convolution kernels are then applied to word vectors instead of individual words.
We use these features to train higher layers of the CNN, in order to represent bigger groups of words in sentences. We denote the feature learned at hidden neuron INLINEFORM0 in layer INLINEFORM1 as INLINEFORM2 . Multiple features may be learned in parallel in the same CNN layer. The features learned in each layer are used to train the next layer: DISPLAYFORM0
where * indicates convolution and INLINEFORM0 is a weight kernel for hidden neuron INLINEFORM1 and INLINEFORM2 is the total number of hidden neurons. The CNN sentence model preserves the order of words by adopting convolution kernels of gradually increasing sizes that span an increasing number of words and ultimately the entire sentence. As mentioned above, each word in a sentence is represented using word embeddings.
We employ the publicly available word2vec vectors, which were trained on 100 billion words from Google News. The vectors are of dimensionality 300, trained using the continuous bag-of-words architecture BIBREF33 . Words not present in the set of pre-trained words are initialized randomly. However, while training the neural network, we use non-static representations: the word vectors taken as input are included in the list of parameters to be learned during training.
Two primary reasons motivated us to use non-static channels as opposed to static ones. Firstly, the common presence of informal language and words in tweets resulted in a relatively high random initialization of word vectors due to the unavailability of these words in the word2vec dictionary. Secondly, sarcastic sentences are known to include polarity shifts in sentimental and emotional degrees. For example, “I love the pain present in breakups" is a sarcastic sentence with a significant change in sentimental polarity. As word2vec was not trained to incorporate these nuances, we allow our models to update the embeddings during training in order to include them. Each sentence is wrapped to a window of INLINEFORM0 , where INLINEFORM1 is the maximum number of words amongst all sentences in the dataset. We use the output of the fully-connected layer of the network as our feature vector.
We have done two kinds of experiments: firstly, we used CNN for the classification; secondly, we extracted features from the fully-connected layer of the CNN and fed them to an SVM for the final classification. The latter CNN-SVM scheme is quite useful for text classification as shown by Poria et al. BIBREF18 . We carry out n-fold cross-validation on the dataset using CNN. In every fold iteration, in order to obtain the training and test features, the output of the fully-connected layer is treated as features to be used for the final classification using SVM. Table TABREF12 shows the training settings for each CNN model developed in this work. ReLU is used as the non-linear activation function of the network. The network configurations of all models developed in this work are given in Table TABREF12 .
CNN can automatically extract key features from the training data. It grasps contextual local features from a sentence and, after several convolution operations, it forms a global feature vector out of those local features. CNN does not need the hand-crafted features used in traditional supervised classifiers.
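A minimal PyTorch sketch of the pipeline described above: non-static embeddings, convolutions of several kernel sizes over the word-vector sequence, max-over-time pooling, and a fully-connected layer whose activations can be handed to an SVM. All hyperparameters here are illustrative assumptions, not the settings in Table TABREF12:

```python
# Sketch of a text CNN with non-static embeddings (PyTorch).
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, kernel_sizes=(3, 4, 5),
                 n_filters=100, feat_dim=128, n_classes=2):
        super().__init__()
        # Non-static: the embedding table is a learned parameter, so
        # word2vec-initialized vectors are updated during training.
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, n_filters, k) for k in kernel_sizes)
        self.fc = nn.Linear(n_filters * len(kernel_sizes), feat_dim)
        self.out = nn.Linear(feat_dim, n_classes)

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)  # (batch, emb, seq)
        # Convolution over n-grams, then max pooling over time.
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        features = torch.relu(self.fc(torch.cat(pooled, dim=1)))
        return self.out(features), features        # logits + SVM features
```

The returned `features` tensor plays the role of the fully-connected-layer output that the CNN-SVM scheme feeds to the SVM.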
|
What's the advantage of CNN?
|
CNN can automatically extract key features from the training data and does not need the hand-crafted features used in traditional supervised classifiers.
|
null | false
| null |
What are the top 3 items that I need when traveling by plane.
|
When packing for this trip, you should pack:
1. An empty reusable water bottle to fill at the airport so you can stay hydrated.
2. A packable airplane travel pillow so that you can sleep comfortably on the flight.
3. A snack in case you get hungry.
|
|
null | false
| null |
how to get started with data science and machine learning?
|
To be a good data scientist, you need a solid understanding of statistics and machine learning algorithms like linear regression, logistic regression, and tree-based models. Knowledge of and experience with deep learning concepts is a big plus nowadays in the job market. In addition, business acumen and good communication skills can help you bridge the gap between technical and business stakeholders. Here are a few recommended study materials: the Coursera course Machine Learning by DeepLearning.AI; Introduction to Machine Learning with Python: A Guide for Data Scientists; Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems. But most importantly, keep learning and start now!
|
|
null | false
| null |
Robert Kelly Slater (born February 11, 1972) is an American professional surfer, best known for being crowned World Surf League champion a record 11 times. Slater is widely regarded as the greatest professional surfer of all time, and holds 56 Championship Tour victories. Slater is also the oldest surfer still active on the World Surf League, winning his 8th Billabong Pipeline Masters title at age 49.
|
Who is the best surfer in the world
|
Kelly Slater
|
1610.09516
| false
| null |
Motivated by recent work involving the use of emojis by gang members BIBREF22 , we also studied if and how gang and non-gang members use emoji symbols in their tweets. Our analysis found that gang members have a penchant for using just a small set of emoji symbols that convey their anger and violent behavior through their tweets. Figure FIGREF24 illustrates the emoji distribution for the top 20 most frequent emojis used by gang member profiles in our dataset. The fuel pump emoji was the most frequently used emoji by the gang members, which is often used in the context of selling or consuming marijuana. The pistol emoji is the second most frequent in our dataset, which is often used with the guardsman emoji or the police cop emoji in an `emoji chain'. Figure FIGREF28 presents some prototypical `chaining' of emojis used by gang members. The chains may reflect their anger at law enforcement officers, as a cop emoji is often followed up with the emoji of a weapon, bomb, or explosion. We found that 32.25% of gang members in our dataset have chained together the police and the pistol emoji, compared to just 1.14% of non-gang members. Moreover, only 1.71% of non-gang members have used the hundred points emoji and pistol emoji together in tweets while 53% of gang members have used them. A variety of the angry face emoji such as devil face emoji and imp emoji were also common in gang member tweets. The frequency of each emoji symbol used across the set of user's tweets are thus considered as features for our classifier.
Our analysis found that gang members have a penchant for using just a small set of emoji symbols that convey their anger and violent behavior through their tweets. Figure FIGREF24 illustrates the emoji distribution for the top 20 most frequent emojis used by gang member profiles in our dataset. The fuel pump emoji was the most frequently used emoji by the gang members, which is often used in the context of selling or consuming marijuana. The pistol emoji is the second most frequent in our dataset, which is often used with the guardsman emoji or the police cop emoji in an `emoji chain'. Figure FIGREF28 presents some prototypical `chaining' of emojis used by gang members. The chains may reflect their anger at law enforcement officers, as a cop emoji is often followed up with the emoji of a weapon, bomb, or explosion. We found that 32.25% of gang members in our dataset have chained together the police and the pistol emoji, compared to just 1.14% of non-gang members. Moreover, only 1.71% of non-gang members have used the hundred points emoji and pistol emoji together in tweets while 53% of gang members have used them. A variety of the angry face emoji such as devil face emoji and imp emoji were also common in gang member tweets.
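A sketch of how per-user emoji frequencies and a cop-then-weapon "chain" indicator might be turned into classifier features; the emoji inventory below is a small illustrative assumption, not the paper's full set:

```python
# Sketch: emoji frequency features plus a chain indicator for a cop emoji
# immediately followed by a weapon emoji. Inventory is illustrative.
from collections import Counter

EMOJIS = ["\u26FD", "\U0001F52B", "\U0001F46E", "\U0001F4AF"]  # fuel, pistol, cop, 100
COP = "\U0001F46E"
WEAPONS = {"\U0001F52B", "\U0001F4A3", "\U0001F4A5"}  # pistol, bomb, explosion

def emoji_features(tweets):
    text = " ".join(tweets)
    counts = Counter(ch for ch in text if ch in EMOJIS)
    chars = [ch for ch in text if not ch.isspace()]
    chained = any(a == COP and b in WEAPONS
                  for a, b in zip(chars, chars[1:]))
    feats = {f"emoji_{e}": counts[e] for e in EMOJIS}
    feats["cop_weapon_chain"] = int(chained)
    return feats
```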
|
What are the differences in the use of emojis between gang member and the rest of the Twitter population?
|
The answers are shown as follows:
* 32.25% of gang members in our dataset have chained together the police and the pistol emoji, compared to just 1.14% of non-gang members
* only 1.71% of non-gang members have used the hundred points emoji and pistol emoji together in tweets while 53% of gang members have used them
* gang members have a penchant for using just a small set of emoji symbols that convey their anger and violent behavior
|
null | false
| null |
Name five NBA teams in the Western Conference.
|
Five NBA teams in the Western Conference are the Phoenix Suns, Memphis Grizzlies, Portland Trail Blazers, San Antonio Spurs, and Oklahoma City Thunder.
|
|
null | false
| null |
Dorrie Nossiter (29 June 1893 – 1977) was an English jeweller and jewellery designer from Aston, near Birmingham.
Nossiter crafted precious jewellery of her own designs in the English Arts and Crafts Tradition in both sterling silver and gold. Her work is known for her use of colour and floral and curvature lines using gemstones in motifs. She was predominantly active during the 1930s.
Nossiter was educated at the Municipal School of Art in Birmingham from 1910 to 1914. Nossiter married Ernest Guy Robinson in 1922. By 1935 she was living in London where her work was shown in the "Art by Four Women" exhibition at Walker's Gallery, London. Nossiter would go on to exhibit there from 1935 to 1939.
Nossiter's work is often confused with that of another female jeweller and jewellery designer of the same period, Sibyl Dunlop.
|
What was the name of Dorrie Nossiter's exhibition?
|
Dorrie Nossiter showed her work in the "Art by Four Women" exhibition at the Walker's Gallery in London, England.
|
null | false
| null |
Why does a boat float on water whereas some other objects sink?
|
An object sinks when its weight exceeds the buoyant force acting on it. Buoyancy is the upward force a liquid exerts on an immersed object, causing it to rise or move higher in the liquid. If the object is denser than water, it sinks; if it is less dense than water, it floats.
|
|
1911.03842
| false
| null |
One of the solutions that has been proposed for mitigating gender bias on the word embedding level is Counterfactual Data Augmentation (CDA) BIBREF25. We apply this method by augmenting our dataset with a copy of every dialogue with gendered words swapped using the gendered word pair list provided by BIBREF21. For example, all instances of grandmother are swapped with grandfather.
One of the solutions that has been proposed for mitigating gender bias on the word embedding level is Counterfactual Data Augmentation (CDA) BIBREF25. We apply this method by augmenting our dataset with a copy of every dialogue with gendered words swapped using the gendered word pair list provided by BIBREF21.
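A sketch of Counterfactual Data Augmentation as described: every dialogue is duplicated with gendered words swapped. The pair list here is a tiny illustrative subset, not the full list from BIBREF21:

```python
# Sketch of CDA word swapping (illustrative pair list).
import re

PAIRS = [("grandmother", "grandfather"), ("she", "he"),
         ("her", "his"), ("woman", "man")]
SWAP = {w: v for a, b in PAIRS for w, v in ((a, b), (b, a))}

def counterfactual(text):
    # \b keeps "she" from matching inside "shed"; case and the
    # her/him ambiguity are ignored for brevity in this sketch.
    return re.sub(r"\b(" + "|".join(SWAP) + r")\b",
                  lambda m: SWAP[m.group(1).lower()], text, flags=re.I)

def augment(dialogues):
    return dialogues + [counterfactual(d) for d in dialogues]
```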
|
How does counterfactual data augmentation aim to tackle bias?
|
The training dataset is augmented by swapping all gendered words by their other gender counterparts
|
null | false
| 183
|
Even though machine translation has improved considerably with the advent of neural machine translation (NMT) BIBREF0 , BIBREF1 , the translation of pronouns remains a major issue. They are notoriously hard to translate since they often require context outside the current sentence.
As an example, consider the sentences in Figure FIGREF1 . In both languages, there is a pronoun in the second sentence that refers to the European Central Bank. When the second sentence is translated from English to German, the translation of the pronoun it is ambiguous. This ambiguity can only be resolved with context awareness: if a translation system has access to the previous English sentence, the previous German translation, or both, it can determine the antecedent the pronoun refers to. In this German sentence, the antecedent Europäische Zentralbank dictates the feminine gender of the pronoun sie.
It is unfortunate, then, that current NMT systems generally operate on the sentence level BIBREF2 , BIBREF3 , BIBREF4 . Documents are translated sentence-by-sentence for practical reasons, such as line-based processing in a pipeline and reduced computational complexity. Furthermore, improvements of larger-context models over baselines in terms of document-level metrics such as BLEU or RIBES have been moderate, so that their computational overhead does not seem justified, and so that it is hard to develop more effective context-aware architectures and empirically validate them.
To address this issue, we present an alternative way of evaluating larger-context models on a test set that allows to specifically measure a model's capability to correctly translate pronouns. The test suite consists of pairs of source and target sentences, in combination with contrastive translation variants (for evaluation by model scoring) and additional linguistic and contextual information (for further analysis). The resource is freely available. Additionally, we evaluate several context-aware models that have recently been proposed in the literature on this test set, and extend existing models with parameter tying.
The main contributions of our paper are:
Section SECREF2 explains how our paper relates to existing work on context-aware models and the evaluation of pronoun translation. Section SECREF3 describes our test suite. The context-aware models we use in our experiments are detailed in Section SECREF4 . We discuss our experiments in Section SECREF5 and the results in Section SECREF6 .
The test suite consists of pairs of source and target sentences, in combination with contrastive translation variants (for evaluation by model scoring) and additional linguistic and contextual information (for further analysis).
|
What does the test suite consist of?
|
The test suite consists of pairs of source and target sentences.
|
null | false
| 156
|
Topic identification (topic ID) on speech aims to identify the topic(s) for given speech recordings, referred to as spoken documents, where the topics are a predefined set of classes or labels. This task is typically formulated as a three-step process. First, speech is tokenized into words or phones by automatic speech recognition (ASR) systems BIBREF0 , or by limited-vocabulary keyword spotting BIBREF1 . Second, standard text-based processing techniques are applied to the resulting tokenizations, and produce a vector representation for each spoken document, typically a bag-of-words multinomial representation, or a more compact vector given by probabilistic topic models BIBREF2 , BIBREF3 . Finally, topic ID is performed on the spoken document representations by supervised training of classifiers, such as Bayesian classifiers and support vector machines (SVMs).
However, in the first step, training the ASR system required for tokenization itself requires transcribed speech and pronunciations. In this paper, we focus on a difficult and realistic scenario where the speech corpus of a test language is annotated only with a minimal number of topic labels, i.e., no manual transcriptions or dictionaries for building an ASR system are available. We aim to exploit approaches that enable topic ID on speech without any knowledge of that language other than the topic annotations.
In this scenario, while previous work demonstrates that the cross-lingual phoneme recognizers can produce reasonable speech tokenizations BIBREF4 , BIBREF5 , the performance is highly dependent on the language and environmental condition (channel, noise, etc.) mismatch between the training and test data. Therefore, we focus on unsupervised approaches that operate directly on the speech of interest. Raw acoustic feature-based unsupervised term discovery (UTD) is one such approach that aims to identify and cluster repeating word-like units across speech based around segmental dynamic time warping (DTW) BIBREF6 , BIBREF7 . BIBREF8 shows that using the word-like units from UTD for spoken document classification can work well; however, the results in BIBREF8 are limited since the acoustic features on which UTD is performed are produced by acoustic models trained from the transcribed speech of its evaluation corpus. In this paper, we investigate UTD-based topic ID performance when UTD operates on language-independent speech representations extracted from multilingual bottleneck networks trained on languages other than the test language BIBREF9 . Another alternative to producing speech tokenizations without language dependency is the model-based approach, i.e., unsupervised learning of hidden Markov model (HMM) based phoneme-like units from untranscribed speech. We exploit the Variational Bayesian inference based acoustic unit discovery (AUD) framework in BIBREF10 that allows parallelized large-scale training. In topic ID tasks, such AUD-based systems have been shown to outperform other systems based on cross-lingual phoneme recognizers BIBREF5 , and this paper aims to further investigate how the performance compares among UTD, AUD and ASR based systems.
Moreover, after the speech is tokenized, these works BIBREF0 , BIBREF1 , BIBREF4 , BIBREF5 , BIBREF8 , BIBREF9 are limited to using bag-of-words features as spoken document representations. While UTD only identifies relatively long (0.5 – 1 sec) repeated terms, AUD/ASR enables full-coverage segmentation of continuous speech into a sequence of units/words, and such a resulting temporal sequence enables another feature learning architecture based on convolutional neural networks (CNNs) BIBREF11 ; instead of treating the sequential tokens as a bag of acoustic units or words, the whole token sequence is encoded as concatenated continuous vectors, and followed by convolution and temporal pooling operations that capture the local and global dependencies. Such continuous space feature extraction frameworks have been used in various language processing tasks like spoken language understanding BIBREF12 , BIBREF13 and text classification BIBREF14 , BIBREF15 . However, three questions are worth investigating in our AUD-based setting: (i) if such a CNN-based framework can perform as well on noisy automatically discovered phoneme-like units as on orthographic words/characters, (ii) if pre-trained vectors of phoneme-like units from word2vec BIBREF16 provide superior performance to random initialization as evidenced by the word-based tasks, and (iii) if CNNs are still competitive in low-resource settings of hundreds to two-thousand training exemplars, rather than the large/medium sized datasets as in previous work BIBREF14 , BIBREF15 .
Finally, incorporating the different tokenization and feature representation approaches noted above, we perform comprehensive topic ID evaluations on both single-label and multi-label spoken document classification tasks.
We exploit the Variational Bayesian inference based acoustic unit discovery (AUD) framework in that allows parallelized large-scale training.
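Once the speech is tokenized into discovered units, the second and third steps of the pipeline reduce to text classification; a sketch assuming scikit-learn, where each spoken document is a space-joined sequence of unit labels (e.g. "u3 u17 u3 ...") rather than real words:

```python
# Sketch: bag-of-units representation + SVM topic ID (scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def train_topic_id(unit_docs, topic_labels):
    clf = make_pipeline(
        TfidfVectorizer(token_pattern=r"\S+"),  # units are whitespace-separated
        LinearSVC())
    clf.fit(unit_docs, topic_labels)
    return clf
```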
|
What framework do the authors use?
|
Variational Bayesian inference based acoustic unit discovery (AUD) framework.
|
null | false
| null |
What makes The Blade Itself by Joe Abercrombie good?
|
I re-read this book 7 years later, after recently reading book 4 and book 5 of the series.
Gosh I've forgotten how excellent the first book is. It's hilarious, grim/dark, witty, with a fantastic cast of characters and a nuanced worldbuilding that I'm dying to uncover.
|
|
null | false
| 111
|
Sentiment analysis has recently been one of the hottest topics in natural language processing (NLP). It is used to identify and categorise opinions expressed by reviewers on a topic or an entity. Sentiment analysis can be leveraged in marketing, social media analysis, and customer service. Although many studies have been conducted for sentiment analysis in widely spoken languages, this topic is still immature for Turkish and many other languages.
Neural networks outperform the conventional machine learning algorithms in most classification tasks, including sentiment analysis BIBREF0. In these networks, word embedding vectors are fed as input to overcome the data sparsity problem and make the representations of words more “meaningful” and robust. Those embeddings indicate how close the words are to each other in the vector space model (VSM).
Most of the studies utilise embeddings, such as word2vec BIBREF1, which take into account the syntactic and semantic representations of the words only. Discarding the sentimental aspects of words may lead to words of different polarities being close to each other in the VSM, if they share similar semantic and syntactic features.
For Turkish, there are only a few studies which leverage sentimental information in generating the word and document embeddings. Unlike the studies conducted for English and other widely-spoken languages, in this paper, we use the official dictionaries for this language and combine the unsupervised and supervised scores to generate a unified score for each dimension of the word embeddings in this task.
Our main contribution is to create original and effective word vectors that capture syntactic, semantic, and sentimental characteristics of words, and use all of this knowledge in generating embeddings. We also utilise the word2vec embeddings trained on a large corpus. Besides using these word embeddings, we also generate hand-crafted features on a review basis and create document vectors. We evaluate those embeddings on two datasets. The results show that we outperform the approaches which do not take into account the sentimental information. We also had better performances than other studies carried out on sentiment analysis in Turkish media. We also evaluated our novel embedding approaches on two English corpora of different genres. We outperformed the baseline approaches for this language as well. The source code and datasets are publicly available.
The paper is organised as follows. In Section 2, we present the existing works on sentiment classification. In Section 3, we describe the methods proposed in this work. The experimental results are shown and the main contributions of our proposed approach are discussed in Section 4. In Section 5, we conclude the paper.
We also evaluated our novel embedding approaches on two English corpora of different genres. We outperformed the baseline approaches for this language as well. The source code and datasets are publicly available
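One simple way to inject sentimental information into word embeddings, sketched below, is to append a unified polarity score (mixing an unsupervised lexicon score with a supervised corpus score) to each word2vec vector; the exact combination scheme used in the paper may differ, and the mixing weight is an assumption:

```python
# Sketch: sentiment-enriched embeddings via an appended polarity dimension.
import numpy as np

def sentiment_enriched(w2v, lexicon_score, corpus_score, alpha=0.5):
    """w2v: dict word -> np.ndarray; scores: dict word -> float in [-1, 1]."""
    enriched = {}
    for word, vec in w2v.items():
        unified = (alpha * lexicon_score.get(word, 0.0)
                   + (1 - alpha) * corpus_score.get(word, 0.0))
        enriched[word] = np.concatenate([vec, [unified]])
    return enriched
```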
|
For English corpora, does their approach outperform the baseline?
|
Yes.
|
1905.13413
| false
| null |
Compared to using external models for confidence modeling, an advantage of the proposed method is that the base model does not change: the binary classification loss just provides additional supervision. Ideally, the resulting model after one round of training becomes better not only at confidence modeling, but also at assertion generation, suggesting that extractions of higher quality can be added as training samples to continue this training process iteratively. The resulting iterative learning procedure (alg:iter) incrementally includes extractions generated by the current model as training samples to optimize the binary classification loss to obtain a better model, and this procedure is continued until convergence. Algorithm (alg:iter) takes as input the training data $\mathcal{D}$ and an initial model $\theta^{(0)}$, returns the model after convergence $\theta$, and initializes the iteration counter $t \leftarrow 0$.
A key step in open IE is confidence modeling, which ranks a list of candidate extractions based on their estimated quality. This is important for downstream tasks, which rely on trade-offs between the precision and recall of extracted assertions. For instance, an open IE-powered medical question answering (QA) system may require its assertions in higher precision (and consequently lower recall) than QA systems for other domains. For supervised open IE systems, the confidence score of an assertion is typically computed based on its extraction likelihood given by the model BIBREF3 , BIBREF5 . However, we observe that this often yields sub-optimal ranking results, with incorrect extractions of one sentence having higher likelihood than correct extractions of another sentence. We hypothesize this is due to the issue of a disconnect between training and test-time objectives. Specifically, the system is trained solely to raise likelihood of gold-standard extractions, and during training the model is not aware of its test-time behavior of ranking a set of system-generated assertions across sentences that potentially include incorrect extractions.
We follow the evaluation metrics described by Stanovsky:2016:OIE2016: area under the precision-recall curve (AUC) and F1 score. An extraction is judged as correct if the predicate and arguments include the syntactic head of the gold standard counterparts.
Compared to using external models for confidence modeling, an advantage of the proposed method is that the base model does not change: the binary classification loss just provides additional supervision.
For supervised open IE systems, the confidence score of an assertion is typically computed based on its extraction likelihood given by the model BIBREF3 , BIBREF5
We follow the evaluation metrics described by Stanovsky:2016:OIE2016: area under the precision-recall curve (AUC) and F1 score.
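A loose sketch of the iterative procedure (alg:iter) described above; `label_against_gold` and `evaluate_auc` are hypothetical helper names, and the model interface is an assumption, not the paper's actual code:

```python
# Sketch of alg:iter: extractions from the current model are added as
# binary-classification training samples until convergence.
def iterative_training(data, model, max_iters=10, tol=1e-3):
    prev_score = float("-inf")
    for t in range(max_iters):
        extractions = model.generate(data)          # current extractions
        labeled = label_against_gold(extractions)   # hypothetical helper
        model.fit(data, extra_binary_samples=labeled)
        score = evaluate_auc(model, data)           # hypothetical helper (AUC)
        if score - prev_score < tol:                # convergence check
            break
        prev_score = score
    return model
```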
|
How does this compare to traditional calibration methods like Platt Scaling?
|
No reliability diagrams are provided and no explicit comparison is made between confidence scores or methods.
|
null | false
| null |
The 2000s saw the rise of first Liverpool, and then Arsenal, to real competitiveness, with Chelsea finally breaking the duopoly by winning in 2004-05. The dominance of the so-called "Top Four" clubs (Arsenal, Chelsea, Liverpool and Manchester United) saw them finish at the top of the table for the bulk of the decade, thereby guaranteeing qualification for the UEFA Champions League. Only three other clubs managed to qualify for the competition during this period: Newcastle United (2001–02 and 2002–03), Everton (2004–05) and Tottenham Hotspur (2009–10) – each occupying the final Champions League spot, with the exception of Newcastle in the 2002–03 season, who finished third.
Following the 2003–04 season, Arsenal acquired the nickname "The Invincibles" as it became the first, and to date, only club to complete a Premier League campaign without losing a single game.
In May 2008, Kevin Keegan stated that "Top Four" dominance threatened the division: "This league is in danger of becoming one of the most boring but great leagues in the world." Premier League chief executive Richard Scudamore said in defence: "There are a lot of different tussles that go on in the Premier League depending on whether you're at the top, in the middle or at the bottom that make it interesting."
Between 2005 and 2012 there was a Premier League representative in seven of the eight Champions League finals, with only "Top Four" clubs reaching that stage. Liverpool (2005), Manchester United (2008) and Chelsea (2012) won the competition during this period, with Arsenal (2006), Liverpool (2007), Chelsea (2008) and Manchester United (2009 and 2011) all losing Champions League finals. Leeds United were the only non-"Top Four" side to reach the semi-finals of the Champions League, in the 2000–01 season. There were three Premier League teams in the Champions League semi-finals in 2006–07, 2007–08, and 2008–09, a feat only ever achieved five times (along with Serie A in 2002–03 and La Liga in 1999–2000).
Additionally, between the 1999–2000 and 2009–10 seasons, four Premier League sides reached UEFA Cup or Europa League finals, with only Liverpool managing to win the competition in 2001. Arsenal (2000), Middlesbrough (2006) and Fulham (2010) all lost their finals.
Although the group's dominance was reduced to a degree after this period with the emergence of Manchester City and Tottenham, in terms of all-time Premier League points won they remain clear by some margin. As of the end of the 2021–22 season – the 27th season of the Premier League – Liverpool, in fourth place in the all-time points table, were over 300 points ahead of the next team, Tottenham Hotspur. They are also the only teams to maintain a winning average of over 50% throughout their entire Premier League tenures.
|
Who are the top 4 in EPL?
|
Arsenal, Liverpool, Manchester United and Chelsea are known as the "Top Four" in the English Premier League.
|
null | false
| null |
What batting traits make a good baseball or softball hitter?
|
There are many traits that great baseball and softball batters share. Some of the most important are extreme confidence, sound batting mechanics, and coordination developed through a combination of talent and practice.
All great hitters share some consistencies in their batting mechanics such as their lower half mechanics, their hand and bat path, and their head and eye movement.
With their lower half mechanics they generate powerful torque from the ground up by timing the planting of their feet in a balanced position and forcefully rotating and opening their hips. Hitting powerful line drives relies heavily on the barrel of the bat meeting the ball flush. Batters increase the odds of this by keeping their hands in, connected to the back shoulder, and having one palm facing up and one palm facing down for as long as possible as the bat path travels through the strike zone. Finally, all great hitters maintain consistent head and eye position throughout the motion.
There are many nuances of batting mechanics that all great hitters share but these are some of the most important.
|
|
null | false
| null |
Xenohormones are found in a variety of different consumer products, agricultural products, and chemicals. Common sources of Xenohormones include:
Contraceptives and Hormone Therapies
Xenohormones and xenoestrogens are commonly used in oral contraceptives such as birth control pills and hormone replacement therapies due to their similarities to natural hormones.
Agriculture
Synthetic estrogenic drugs such as the bovine growth hormone (BVG) are commonly used to increase the size of cattle and maximize the amount of meat and dairy product that can come from them. Xenohormones are also found in certain pesticides, herbicides, and fungicides.
Plastics
Xenohormones are found in almost all plastics, and they appear in many consumer products that use plastic elements or plastic packaging. Common xenohormones in plastics and other industrial compounds include BPA, Phthalates, PVC, and PCBs. These can be found in several household items, including plastic dishes and utensils, Styrofoam, cling wrap, flooring, toys, and other items containing plastic or plasticizers. In 2000, the FDA banned the use of phthalates in baby toys due to health concerns.
Cleaning and Cosmetic Products
Many household products can contain certain xenohormones, including laundry detergent, fabric softeners, soap, shampoo, toothpaste, makeup and cosmetic products, feminine hygiene products
|
What is Xenohormone normally used for?
|
Xenohormones are widely used in, or found in, a range of applications and products, including contraceptives and hormone therapies, agriculture, plastics, and cleaning and cosmetic products.
|
null | false
| null |
Dublin is the largest centre of education in Ireland, and is home to four universities and a number of other higher education institutions. It was the European Capital of Science in 2012.
The University of Dublin is the oldest university in Ireland, dating from the 16th century, and is located in the city centre. Its sole constituent college, Trinity College (TCD), was established by Royal Charter in 1592 under Elizabeth I. It was closed to Roman Catholics until 1793, and the Catholic hierarchy then banned Roman Catholics from attending until 1970. It is situated in the city centre, on College Green, and has over 18,000 students.
The National University of Ireland (NUI) has its seat in Dublin, which is also the location of the associated constituent university of University College Dublin (UCD), which has over 30,000 students. Founded in 1854, it is now the largest university in Ireland. UCD's main campus is at Belfield, about 5 km (3 mi) from the city centre, in the southeastern suburbs.
As of 2019, Dublin's principal, and Ireland's largest, institution for technological education and research, Dublin Institute of Technology (DIT), with origins in 1887, has merged with two major suburban third level institutions, Institute of Technology, Tallaght and Institute of Technology, Blanchardstown, to form Technological University Dublin, Ireland's second largest university by student population. The new university offers a wide range of courses in areas including engineering, architecture, the sciences, health, journalism, digital media, hospitality, business, art and design, music and the humanities, and has three long-term campuses, at Grangegorman, Tallaght and Blanchardstown.
Dublin City University (DCU), formerly the National Institute for Higher Education (NIHE) Dublin, offers courses in business, engineering, science, communication courses, languages and primary education. It has around 16,000 students, and its main campus is located about 7 km (4 mi) from the city centre, in the northern suburbs. Aside from the main Glasnevin Campus, the Drumcondra campuses includes the former St. Patrick's College of Education, Drumcondra now also hosting students from the nearby Mater Dei Institute of Education and students from the Church of Ireland College of Education at the DCU Campus at All Hallows College.
The Royal College of Surgeons in Ireland (RCSI) conducts a medical school which is both a university (since 2019) and a recognised college of the NUI, and is situated at St. Stephen's Green in the city centre; there are also large medical schools within UCD and Trinity College. The Royal College of Surgeons in Ireland, also known as RCSI University of Medicine and Health Sciences, is Ireland's first private university. It was established in 1784 as the national body for the surgical branch of medicine in Ireland, with a role in supervision of training, and as of 2021 provides a broad range of medical education in multiple countries.
|
List the universities in Dublin in order of founding year ascending. Supply the answer as an enumerated list in the format {University name}, {Year of founding}
|
1. Trinity College Dublin (TCD), 1592
2. Royal College of Surgeons in Ireland (RCSI), 1784
3. University College Dublin (UCD), 1854
4. Dublin Institute of Technology (DIT), 1887
5. Dublin City University (DCU), 1975
|
null | false
| null |
SoundBridge is a hardware device from Roku, Inc. designed to play internet radio or digital audio streamed across a home network, over either Wi-Fi or ethernet. SoundBridge devices directly browsed the Radio Roku guide. As of 2008 all Roku SoundBridge products were discontinued; Roku focused on IPTV. As of January 2012, the SoundBridge was no longer available from Roku. As of May 2018, internet radio functionality was no longer supported by Roku.
The music is made available by a streaming server, usually a PC running media software. The SoundBridge had a high resolution vacuum fluorescent display and was compatible with various media servers, namely servers using Apple Computer's Digital Audio Access Protocol; popular servers are iTunes, or mt-daapd, Windows Media Connect, Rhapsody, SlimServer and UPnP (Universal Plug and Play) compatible servers such as TwonkyVision. Some of these servers run not only on PCs, but also on NAS devices like the Linksys NSLU2, so a SoundBridge could be operated without a PC.
|
Which company designed SoundBridge?
|
The company that designed SoundBridge is Roku, Inc.
|
1909.06434
| false
| null |
All baselines are Transformer models in their base configuration BIBREF11, using 6 encoder and decoder layers, with model and hidden dimensions of 512 and 2048 respectively, and 8 heads for all attention layers. For initial multi-task experiments, all model parameters were shared BIBREF12, but performance was down by multiple BLEU points compared to the baselines. As the source language pair is the same for both tasks, in subsequent experiments, only the encoder is shared BIBREF5. For En-Fr, 10% dropout is applied as in BIBREF11. After observing severe overfitting on En-De in early experiments, the rate is increased to 25% for this lower-resource task. All models are trained on 16 GPUs, using Adam optimizer with a learning rate schedule (inverse square root BIBREF11) and warmup.
All baselines are Transformer models in their base configuration BIBREF11, using 6 encoder and decoder layers, with model and hidden dimensions of 512 and 2048 respectively, and 8 heads for all attention layers. For initial multi-task experiments, all model parameters were shared BIBREF12, but performance was down by multiple BLEU points compared to the baselines.
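The "inverse square root" schedule with warmup from BIBREF11 can be sketched in a few lines; the warmup value of 4000 is the original paper's default and an assumption here, since the experiments' setting is not stated:

```python
# Sketch of the Transformer learning-rate schedule: linear warmup,
# then decay proportional to 1/sqrt(step).
def transformer_lr(step, d_model=512, warmup=4000):
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)
```

With `d_model=512` and `warmup=4000`, the rate rises linearly, peaks at step 4000, and decays thereafter.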
|
What baselines non-adaptive baselines are used?
|
The answers are shown as follows:
* Transformer models in their base configuration BIBREF11, using 6 encoder and decoder layers, with model and hidden dimensions of 512 and 2048 respectively, and 8 heads for all attention layers
|
null | false
| null |
Which of the following are real vs. mythical animals: dove, unicorn, Komodo dragon, Sasquatch, cockatrice, snipe
|
Real: dove, Komodo dragon, snipe
Mythical: unicorn, Sasquatch, cockatrice
Note that asking someone to go on a 'snipe hunt' is a prank that tricks them into searching for something that does not exist. So 'snipe' in this context refers to something that is not real, but there is in fact a real snipe family of birds.
|
|
null | false
| null |
What is the largest country in the world?
|
There are multiple ways that "largest" can be defined. By area, Russia is the largest country in the world; alternatively, by population, China is the world's most populous country.
|
|
null | false
| null |
How is interest paid on a series I bond?
|
Interest is paid through a compounding effect. Unlike most bonds, which pay "coupons" that are attached to the bond, Series I bonds pay via interest that accrues and compounds.
|
|
null | false
| null |
How do I run faster?
|
Running faster requires a lot of practice and a good baseline of fitness. You can develop the baseline of fitness by building a regular habit of long slow running for up to an hour. You can then improve your speed by including Tempo and Interval workouts in your regular training plan. It may be helpful to work with a coach or use a training platform like TrainingPeaks to build a plan which meets your needs.
|
|
null | false
| 249
|
As mentioned earlier, although research on cyberbullying detection is more limited than social studies on the phenomenon, some important advances have been made in recent years. In what follows, we present a brief overview of the most important natural language processing approaches to cyberbullying detection.
Although some studies have investigated the effectiveness of rule-based modelling BIBREF31 , the dominant approach to cyberbullying detection involves machine learning. Most machine learning approaches are based on supervised BIBREF32 , BIBREF33 , BIBREF34 or semi-supervised learning BIBREF35 . The former involves the construction of a classifier based on labeled training data, whereas semi-supervised approaches rely on classifiers that are built from a training corpus containing a small set of labeled and a large set of unlabelled instances (a method that is often used to handle data sparsity). As cyberbullying detection essentially involves the distinction between bullying and non-bullying posts, the problem is generally approached as a binary classification task where the positive class is represented by instances containing (textual) cyberbullying, while the negative class includes instances containing non-cyberbullying or `innocent' text.
A key challenge in cyberbullying research is the availability of suitable data, which is necessary to develop models that characterise cyberbullying. In recent years, only a few datasets have become publicly available for this particular task, such as the training sets provided in the context of the CAW 2.0 workshop and more recently, the Twitter Bullying Traces dataset BIBREF36 . As a result, several studies have worked with the former or have constructed their own corpus from social media websites that are prone to bullying content, such as YouTube BIBREF32 , BIBREF33 , Formspring BIBREF33 , and ASKfm BIBREF37 (the latter two are social networking sites where users can send each other questions or respond to them). Despite the bottleneck of data availability, existing approaches to cyberbullying detection have shown its potential, and the relevance of automatic text analysis techniques to ensure child safety online has been recognised BIBREF38 , BIBREF39 .
Among the first studies on cyberbullying detection are BIBREF34 , BIBREF31 , BIBREF33 , who explored the predictive power of INLINEFORM0 -grams (with and without tf-idf weighting), part-of-speech information (e.g. first and second pronouns), and sentiment information based on profanity lexicons for this task. Similar features were also exploited for the detection of cyberbullying events and fine-grained text categories related to cyberbullying BIBREF37 , BIBREF40 . More recent studies have demonstrated the added value of combining such content-based features with user-based information, such as including users' activities on a social network (i.e., the number of posts), their age, gender, location, number of friends and followers, and so on BIBREF32 , BIBREF35 , BIBREF41 . Moreover, semantic features have been explored to further improve classification performance of the task. To this end, topic model information BIBREF42 , as well as semantic relations between INLINEFORM1 -grams (according to a Word2Vec model BIBREF43 ) have been integrated.
As mentioned earlier, data collection remains a bottleneck in cyberbullying research. Although cyberbullying has been recognised as a serious problem (cf. Section SECREF1 ), real-world examples are often hard to find in public platforms. Naturally, the vast majority of communications do not contain traces of verbal aggression or transgressive behaviour. When constructing a corpus for machine learning purposes, this results in imbalanced datasets, meaning that one class (e.g. cyberbullying posts) is much less represented in the corpus than the other (e.g. non-cyberbullying posts). To tackle this problem, several studies have adopted resampling techniques BIBREF35 , BIBREF41 , BIBREF31 that create synthetic minority class examples or reduce the number of negative class examples (i.e., minority class oversampling and majority class undersampling BIBREF44 ).
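A sketch of one such rebalancing step, using naive random oversampling of the minority class; note the cited work often synthesizes minority examples (SMOTE-style) rather than duplicating them, so this is a simplified assumption:

```python
# Sketch: rebalancing an imbalanced cyberbullying corpus by randomly
# oversampling the positive (bullying) class. Standard library only.
import random

def rebalance(posts, labels, ratio=1.0, seed=0):
    """Oversample the positive class until len(pos) == ratio * len(neg)."""
    rng = random.Random(seed)
    pos = [p for p, y in zip(posts, labels) if y == 1]
    neg = [p for p, y in zip(posts, labels) if y == 0]
    target = int(ratio * len(neg))
    while pos and len(pos) < target:     # naive duplication; SMOTE would
        pos.append(rng.choice(pos))      # synthesize new examples instead
    data = [(p, 1) for p in pos] + [(n, 0) for n in neg]
    rng.shuffle(data)
    return data
```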
Table TABREF9 presents a number of recent studies on cyberbullying detection, providing insight into the state of the art in cyberbullying research and the contribution of the current research to the domain.
The studies discussed in this section have demonstrated the feasibility of automatic cyberbullying detection in social media data by making use of a varied set of features. Most of them have, however, focussed on cyberbullying `attacks', or posts written by a bully. Moreover, it is not entirely clear if different forms of cyberbullying have been taken into account (e.g. sexual intimidation or harassment, or psychological threats), in addition to derogatory language or insults.
In the research described in this paper, cyberbullying is considered a complex phenomenon consisting of different forms of harmful behaviour online, which are described in more detail in our annotation scheme BIBREF30 . Purposing to facilitate manual monitoring efforts on social networks, we develop a system that automatically detects signals of cyberbullying, including attacks from bullies, as well as victim and bystander reactions. Similarly, BIBREF42 investigated bullying traces posted by different author roles (accuser, bully, reporter, victim). However, they collected tweets by using specific keywords (i.e., bully, bullied and bullying). As a result, their corpus contains many reports or testimonials of a cyberbullying incident (example 1), instead of actual signals that cyberbullying is going on. Moreover, their method implies that cyberbullying-related content devoid of such keywords will not be part of the training corpus.
`Some tweens got violent on the n train, the one boy got off after blows 2 the chest... Saw him cryin as he walkd away :( bullying not cool' BIBREF42
For this research, English and Dutch social media data were annotated for different forms of cyberbullying, based on the actors involved in a cyberbullying incident. After preliminary experiments for Dutch BIBREF37 , BIBREF40 , we currently explore the viability of detecting cyberbullying-related posts in Dutch and English social media. To this end, binary classification experiments are performed exploiting a rich feature set and optimised hyperparameters.
A key challenge in cyberbullying research is the availability of suitable data, which is necessary to develop models that characterise cyberbullying.
|
What is the key challenge in cyberbullying research?
|
The key challenge is the availability of suitable data.
|
1909.05017
| false
| null |
A low WER would indicate that a model-generated question is similar to the reference question, while a high WER means that the generated question differs significantly from the reference question. It should be noted that a high WER does not necessarily invalidate the generated question, as different questions can have the same answers, and there could be various ways of phrasing the same question. On the other hand, a low WER of 1 could be due to the question missing a single pertinent word needed to convey the correct idea. Despite these shortcomings of WER as a metric of success, WER can reflect our model's effectiveness in generating questions that are similar to those of SQuAD based on the given reading passage and answer. WER can be used for initial analyses that can lead to deeper insights, as discussed further below.
Despite these shortcomings in using WER as a metric of success, the WER can reflect our model's effectiveness in generating questions that are similar to those of SQuAD based on the given reading passage and answer. WER can be used for initial analyses that can lead to deeper insights as discussed further below.
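For reference, the standard WER computation is a word-level Levenshtein distance normalized by reference length; a sketch (the `max(len(r), 1)` guard against empty references is an implementation assumption):

```python
# WER = (substitutions + insertions + deletions) / reference length.
def wer(reference, hypothesis):
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / max(len(r), 1)
```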
|
Why did they choose WER as evaluation metric?
|
The answers are shown as follows:
* WER can reflect our model's effectiveness in generating questions that are similar to those of SQuAD
* WER can be used for initial analyses
|
null | false
| 40
|
Electroencephalography (EEG) recordings of brain activity taken while participants read or listen to language are widely used within the cognitive neuroscience and psycholinguistics communities as a tool to study language comprehension. Several time-locked stereotyped EEG responses to word-presentations -- known collectively as event-related potentials (ERPs) -- are thought to be markers for semantic or syntactic processes that take place during comprehension. However, the characterization of each individual ERP in terms of what features of a stream of language trigger the response remains controversial. Improving this characterization would make ERPs a more useful tool for studying language comprehension. We take a step towards better understanding the ERPs by fine-tuning a language model to predict them. This new approach to analysis shows for the first time that all of the ERPs are predictable from embeddings of a stream of language. Prior work has only found two of the ERPs to be predictable. In addition to this analysis, we examine which ERPs benefit from sharing parameters during joint training. We find that two pairs of ERPs previously identified in the literature as being related to each other benefit from joint training, while several other pairs of ERPs that benefit from joint training are suggestive of potential relationships. Extensions of this analysis that further examine what kinds of information in the model embeddings relate to each ERP have the potential to elucidate the processes involved in human language comprehension.
However, the characterization of each individual ERP in terms of what features of a stream of language trigger the response remains controversial. Improving this characterization would make ERPs a more useful tool for studying language comprehension. We take a step towards better understanding the ERPs by fine-tuning a language model to predict them.
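As a rough illustration of the prediction setup, a minimal multi-task sketch assuming PyTorch: a shared projection over word embeddings feeds one linear head per ERP, so that joint training shares parameters across ERPs. The ERP names, embedding size, and shared-layer design here are assumptions, not the paper's exact architecture.

```python
# Sketch of predicting several ERP amplitudes from word embeddings with a
# shared projection; dimensions and ERP names are illustrative assumptions.
import torch
import torch.nn as nn

ERPS = ["N400", "P600", "ELAN", "LAN", "PNP", "EPNP"]  # assumed subset

class ErpHeads(nn.Module):
    def __init__(self, emb_dim: int = 768, shared_dim: int = 128):
        super().__init__()
        self.shared = nn.Linear(emb_dim, shared_dim)  # jointly trained part
        self.heads = nn.ModuleDict({e: nn.Linear(shared_dim, 1) for e in ERPS})

    def forward(self, word_embeddings: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.shared(word_embeddings))
        # One scalar prediction per ERP per word.
        return torch.cat([self.heads[e](h) for e in ERPS], dim=-1)

model = ErpHeads()
x = torch.randn(2, 10, 768)  # batch of 2 sentences, 10 words each
print(model(x).shape)        # torch.Size([2, 10, 6])
```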
|
How to predict the characterization of each individual ERP?
|
By fine-tuning a language model.
|
null | false
| null |
Why do many people not like pigeons?
|
Like most wild animals, pigeons might spread diseases that people are susceptible to. In addition, pigeon droppings can significantly pollute a city and be a source of infection. As the pigeon population has increased over the years, these concerns have become more important.
|
|
null | false
| null |
Imagine you are the last person alive. You're an astronaut and alone and in space aboard the ISS. Write down your feelings and what happened to get there, with a detailed backstory.
|
Finally, it happened. I never imagined the day would come, a final drop in a bucket for all mankind. I owe it to myself... no, I owe it to at least someone, to someone who can remember the human legacy, if there even is one in the future, to write down what happened, to give at least some semblance to the horrors unfolding below me.
I was always a starry-eyed child, looking at the heavens above and wondering, what's up there? And my dreams came true, first the flight school, then a military pilot, but everything made sense after my former instructor at the academy offered me a spot in the NASA Astronaut program. I couldn't believe it. I'm going to space? Really?
A few hard years later I was a regular member of a mission flying back and forth between Earth and ISS. Sitting down on top of a rocket strapped down inside of a tiny capsule is not fun, I can tell you that, but the rush of adrenaline when you're stuck to your seat flying through the atmosphere never gets old.
But this time, the feeling was - different. I arrived at the ISS on schedule and relieved my colleague Max. She was happy to see me; after all, 6 months spinning around the globe alone would do that to you. She was a dear friend of mine, but we didn't have much time for catch-up and pleasantries. She was scheduled to go back on the same day, and left soon after - with a long goodbye.
I always dreamed of a better world, where humans would work together for the betterment of mankind. I loved Star Trek as a child, a place where differences didn't matter and everyone was working for the benefit of everyone. But we built a different world. A world of greed, a world of selfishness, a world where having more than others meant being better than another.
We never really chose our leaders, or if we did, the fault is ours. A few years after the second pandemic, tensions rose. The rich were getting richer, the poor were getting poorer, and nothing was ever enough. And some people craved power. They wanted more. More land, more resources, more money.
They launched their missiles first. At least it seemed that way from the media that was still running at the time. Didn't matter though, our defenses were inadequate. Watching through my tiny window I could see the planet burning. We, of course, fired back. I could see flash after flash hitting the ground. Cities burning. Billions dying. But in the end, it didn't matter. We all lost.
Slowly, a massive cloud started covering the planet. It started from the places that were hit, but over weeks it spread throughout the globe. The once beautiful pale blue dot is now nothing more than a grey patch in the sky. I could still hear some amateur radio stations from remote regions, trying to communicate and let someone know they are alive, but those were few and far between. Months later, they all died out.
And now, I'm here, in the slowly decaying orbit of the International Space Station, alone, waiting for the replacement that will never come, thinking maybe Max was the lucky one.
|
|
null | false
| null |
Name some popular American beer brands.
|
Rogue, Widmer, Budweiser, Coors, Michelob, Blue Moon
|
|
null | false
| null |
Which is the well-known sport in India?
|
While India as a country has followers of many sports, cricket has been the most popular game of all time. In recent times there has been a great following for football; however, cricket remains the most popular.
|
|
null | false
| null |
What were the 'Big Four' railway companies in the UK before nationalisation
|
The Big Four were the London, Midland and Scottish Railway (LMS), London and North Eastern Railway (LNER), Great Western Railway (GWR) and Southern Railway (SR)
|
|
null | false
| null |
Name some Hip-Hop duos.
|
Hip-Hop has seen many duos; apart from rappers who frequently collaborate, some form a duo under a separate name. Some of these are:
Run the Jewels (El-P and Killer Mike)
Mobb Deep (Prodigy and Havoc)
Black Star (Talib Kweli and Mos Def)
Kids See Ghosts (Kanye West and Kid Cudi)
Bad meets Evil (Eminem and Royce da 5'9")
OutKast (Andre 3000 and Big Boi)
|
|
null | false
| null |
List five items you would need to survive in the wilderness.
|
Water, food, knife, first aid kit, and a lighter or matches
|
|
null | false
| null |
What is palmitoylation
|
Palmitoylation is the covalent attachment of fatty acids, such as palmitic acid, to cysteine (S-palmitoylation) and less frequently to serine and threonine (O-palmitoylation) residues of proteins, which are typically membrane proteins. The precise function of palmitoylation depends on the particular protein being considered. Palmitoylation enhances the hydrophobicity of proteins and contributes to their membrane association. Palmitoylation also appears to play a significant role in subcellular trafficking of proteins between membrane compartments, as well as in modulating protein–protein interactions. In contrast to prenylation and myristoylation, palmitoylation is usually reversible (because the bond between palmitic acid and protein is often a thioester bond). The reverse reaction in mammalian cells is catalyzed by acyl-protein thioesterases (APTs) in the cytosol and palmitoyl protein thioesterases in lysosomes. Because palmitoylation is a dynamic, post-translational process, it is believed to be employed by the cell to alter the subcellular localization, protein–protein interactions, or binding capacities of a protein.
An example of a protein that undergoes palmitoylation is hemagglutinin, a membrane glycoprotein used by influenza to attach to host cell receptors. The palmitoylation cycles of a wide array of enzymes have been characterized in the past few years, including H-Ras, Gsα, the β2-adrenergic receptor, and endothelial nitric oxide synthase (eNOS). In signal transduction via G protein, palmitoylation of the α subunit, prenylation of the γ subunit, and myristoylation are involved in tethering the G protein to the inner surface of the plasma membrane so that the G protein can interact with its receptor.
|
|
null | false
| null |
In probability theory and statistics, the harmonic distribution is a continuous probability distribution. It was discovered by Étienne Halphen, who had become interested in the statistical modeling of natural events. His practical experience in data analysis motivated him to pioneer a new system of distributions that provided sufficient flexibility to fit a large variety of data sets. Halphen restricted his search to distributions whose parameters could be estimated using simple statistical approaches.
|
Who discovered the harmonic distribution?
|
Étienne Halphen
|
null | false
| 98
|
Lexical analysis, syntactic analysis, semantic analysis, discourse analysis and pragmatic analysis are the five main steps in natural language processing BIBREF0, BIBREF1. While morphology is a basic task in the lexical analysis of English, word segmentation is considered a basic task in the lexical analysis of Vietnamese and other East Asian languages. This task determines the borders between words in a sentence; in other words, it segments a sequence of tokens into a sequence of meaningful words.
Word segmentation is the primary step prior to other natural language processing tasks, e.g., term extraction and linguistic analysis (as shown in Figure 1). It identifies the basic meaningful units in input texts, which are then processed in the subsequent steps of several applications. For named entity recognition BIBREF2, word segmentation chunks sentences in input documents into sequences of words before they are further classified into named entity classes. For Vietnamese, words and candidate terms can be extracted from Vietnamese corpora (such as books, novels, news, and so on) using a word segmentation tool. Features and context of these words and terms are used to identify named entity tags, the topic of documents, or function words. For linguistic analysis, several linguistic features from dictionaries can be used either to annotate POS tags or to identify answer sentences. Moreover, language models can be trained using machine learning approaches and used in tagging systems, like the named entity recognition system of Tran et al. BIBREF2.
Many studies focus on word segmentation for Asian languages, such as Chinese, Japanese, Burmese (Myanmar) and Thai BIBREF3, BIBREF4, BIBREF5, BIBREF6. Approaches to the word segmentation task vary from lexicon-based to machine learning-based methods. Recently, machine learning-based methods have been widely used to address this issue, such as Support Vector Machines or Conditional Random Fields BIBREF7, BIBREF8. In general, Chinese is the language with the most studies on the word segmentation issue. However, there is a lack of surveys of word segmentation studies on Asian languages, including Vietnamese. This paper aims to review state-of-the-art word segmentation approaches and systems applied to Vietnamese. This study will serve as a foundation for studies on Vietnamese word segmentation and subsequent Vietnamese tasks as well, such as part-of-speech taggers, chunkers, or parsers.
There have been several studies of the Vietnamese word segmentation task over the last decade. Dinh et al. started this task with a Weighted Finite State Transducer (WFST) approach and a Neural Network approach BIBREF9. In addition, machine learning approaches have been studied and widely applied to natural language processing and to word segmentation as well. In fact, several studies used support vector machines (SVM) and conditional random fields (CRF) for the word segmentation task BIBREF7, BIBREF8. Based on annotated corpora and token-based features, these studies used machine learning approaches to build word segmentation systems with accuracy of about 94%-97%.
Based on our observations, we found that there is a lack of a complete review of the approaches, datasets and toolkits recently used in Vietnamese word segmentation. An all-sided review of word segmentation will give subsequent studies on Vietnamese natural language processing tasks an up-to-date guideline for choosing the most suitable solution for the task. The remaining part of the paper is organized as follows. Section II discusses building corpora in Vietnamese, covering linguistic issues and the building process. Section III briefly mentions methods to model sentences and text in machine learning systems. Next, learning models and approaches for labeling and segmenting sequence data are presented in Section IV. Section V mainly addresses two existing toolkits, vnTokenizer and JVnSegmenter, for Vietnamese word segmentation. Several experiments based on the mentioned approaches and toolkits are described in Section VI. Finally, conclusions and future work are given in Section VII.
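As a concrete illustration of the lexicon-based end of the spectrum mentioned above, a minimal greedy longest-matching segmenter over Vietnamese syllables; the toy lexicon and maximum word length are assumptions, not taken from any of the surveyed systems.

```python
# Greedy longest-matching word segmentation over syllables, a sketch of the
# lexicon-based approach; the toy lexicon is an illustrative assumption.
LEXICON = {"học sinh", "sinh học", "học", "sinh", "giỏi"}  # assumed entries
MAX_WORD_SYLLABLES = 3

def segment(sentence: str) -> list[str]:
    syllables = sentence.split()
    words, i = [], 0
    while i < len(syllables):
        # Try the longest candidate first, back off to a single syllable.
        for n in range(min(MAX_WORD_SYLLABLES, len(syllables) - i), 0, -1):
            candidate = " ".join(syllables[i:i + n])
            if n == 1 or candidate in LEXICON:
                words.append(candidate)
                i += n
                break
    return words

print(segment("học sinh học sinh học"))
# ['học sinh', 'học sinh', 'học'] under this lexicon and greedy strategy
```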
This paper aims to review state-of-the-art word segmentation approaches and systems applied to Vietnamese.
|
What does the paper aim to do?
|
To review state-of-the-art word segmentation approaches and systems applied to Vietnamese language processing.
|
null | false
| null |
What is the capital of Maine?
|
Augusta is the capital of Maine
|
|
null | false
| null |
Pfizer Inc. (/ˈfaɪzər/ FY-zər) is an American multinational pharmaceutical and biotechnology corporation headquartered on 42nd Street in Manhattan, New York City. The company was established in 1849 in New York by two German entrepreneurs, Charles Pfizer (1824–1906) and his cousin Charles F. Erhart (1821–1891).
Pfizer develops and produces medicines and vaccines for immunology, oncology, cardiology, endocrinology, and neurology. The company's largest products by sales are the Pfizer–BioNTech COVID-19 vaccine ($37 billion in 2022 revenues), Nirmatrelvir/ritonavir ($18 billion in 2022 revenues), Apixaban ($6 billion in 2022 revenues), a pneumococcal conjugate vaccine ($6 billion in 2022 revenues), and Palbociclib ($5 billion in 2022 revenues). In 2022, 42% of the company's revenues came from the United States, 8% came from Japan, and 50% came from other countries.
Pfizer was a component of the Dow Jones Industrial Average stock market index from 2004 to August 2020. The company ranks 43rd on the Fortune 500 and 43rd on the Forbes Global 2000.
|
Using the following text, list the top 4 drugs by revenue for Pfizer in 2022?
|
1. Pfizer–BioNTech COVID-19 vaccine
2. Nirmatrelvir/ritonavir
3. Apixaban
4. Pneumococcal conjugate vaccine
|
null | false
| 244
|
In this paper, we explore the historical significance of the Croatian machine translation research group. The group was active in the 1950s and was led by Bulcsu Laszlo, a Croatian linguist who pioneered machine translation in Yugoslavia during that decade.
To put the research of the Croatian group in the right context, we have to explore the origin of the idea of machine translation. The idea of machine translation is an old one, and its origin is commonly connected with the work of Rene Descartes, i.e. to his idea of universal language, as described in his letter to Mersenne from 20.xi.1629 BIBREF0. Descartes describes universal language as a simplified version of the language which will serve as an “interlanguage” for translation. That is, if we want to translate from English to Croatian, we will firstly translate from English to an “interlanguage”, and then from the “interlanguage” to Croatian. As described later in this paper, this idea had been implemented in the machine translation process, firstly in the Indonesian-to-Russian machine translation system created by Andreev, Kulagina and Melchuk from the early 1960s.
In modern times, the idea of machine translation was put forth by the philosopher and logician Yehoshua Bar-Hillel (most notably in BIBREF1 and BIBREF2), whose papers were studied by the Croatian group. Perhaps the most important unrealized point of contact between machine translation and cybernetics happened in the winter of 1950/51. In that period, Bar-Hillel met Rudolf Carnap in Chicago, who introduced to him the (new) idea of cybernetics. Also, Carnap gave him the contact details of his former teaching assistant, Walter Pitts, who was at that moment with Norbert Wiener at MIT and who was supposed to introduce him to Wiener, but the meeting never took place BIBREF3. Nevertheless, Bar-Hillel was to stay at MIT where he, inspired by cybernetics, would go to organize the first machine translation conference in the world in 1952 BIBREF3.
The idea of machine translation was a tempting idea in the 1950s. The main military interest in machine translation as an intelligence gathering tool (translation of scientific papers, daily press, technical reports, and everything the intelligence services could get their hands on) was sparked by the Soviet advance in nuclear technology, and would later be compounded by the success of Vostok 1 (termed by the USA as a “strategic surprise”). In the nuclear age, being able to read and understand what the other side was working on was of crucial importance BIBREF4. Machine translation was quickly absorbed in the program of the Dartmouth Summer Research Project on Artificial Intelligence in 1956 (where Artificial Intelligence as a field was born), as one of the five core fields of artificial intelligence (later to be known as natural language processing). One other field was included here, the “nerve nets” as they were known back then, today commonly known as artificial neural networks. What is also essential for our discussion is that the earliest programming language for artificial intelligence, Lisp, was invented in 1958 by John McCarthy BIBREF5. But let us take a closer look at the history of machine translation. In the USA, the first major wave of government and military funding for machine translation came in 1954, and the period of abundancy lasted until 1964, when the National Research Council established the Automatic Language Processing Advisory Committee (ALPAC), which was to assess the results of the ten years of intense funding. The findings were very negative, and funding was almost gone BIBREF4, hence the ALPAC report became the catalyst for the first “AI Winter”.
One of the first recorded attempts of producing a machine translation system in the USSR was in 1954 BIBREF6, and the attempt was applauded by the Communist party of the Soviet Union, by the USSR Committee for Science and Technology and the USSR Academy of Sciences. The source does not specify how this first system worked, but it does delineate that the major figures of machine translation of the time were N. Andreev of the Leningrad State University, O. Kulagina and I. Melchuk of the Steklov Mathematical Institute. There is information on an Indonesian-to-Russian machine translation system by Andreev, Kulagina and Melchuk from the early 1960s, but it is reported that the system was ultimately a failure, in the same way early USA systems were. The system had statistical elements set forth by Andreev, but the bulk was logical and knowledge-heavy processing put forth by Kulagina and Melchuk. The idea was to have a logical intermediate language, under the working name “Interlingua”, which was the connector of both natural languages, and was used to model common-sense human knowledge. For more details, see BIBREF6.
In the USSR, there were four major approaches to machine translation in the late 1950s BIBREF7. The first one was the research at the Institute for Precise Mechanics and Computational Technology of the USSR Academy of Sciences. Their approach was mostly experimental and not much different from today's empirical methods. They evaluated the majority of algorithms known at the time over meticulously prepared datasets, whose main strength was data cleaning, and by 1959 they had built a German-Russian machine translation prototype. The second approach, as noted by Mulić BIBREF7, was championed by the team at the Steklov Mathematical Institute of the USSR Academy of Sciences led by A. A. Reformatsky. Their approach was mainly logical, and they extended the theoretical ideas of Bar-Hillel BIBREF2 to build three algorithms: French-Russian, English-Russian and Hungarian-Russian. The third and perhaps the most successful approach was the one by A. A. Lyapunov, O. S. Kulagina and R. L. Dobrushin. Their efforts resulted in the formation of the Mathematical Linguistics Seminar at the Faculty of Philology in Moscow in 1956 and in Leningrad in 1957. Their approach was mainly information-theoretic (but they also tried logic-based approaches BIBREF7), which was considered cybernetic at that time. This was the main role model for the Croatian efforts from 1957 onwards. The fourth, and perhaps most influential, was the approach at the Experimental Laboratory of the Leningrad University championed by N. D. Andreev BIBREF7. Here, the algorithms for Indonesian-Russian, Arabic-Russian, Hindi-Russian, Japanese-Russian, Burmese-Russian, Norwegian-Russian, English-Russian, Spanish-Russian and Turkish-Russian were being built. The main approach of Andreev's group was to use an intermediary language, which would capture the meanings BIBREF7. It was an approach similar to KL-ONE, which would be introduced in the West much later (in 1985) by Brachman and Schmolze BIBREF8. It is also interesting to note that the Andreev group had a profound influence on the Czechoslovakian machine translation program BIBREF9, which unfortunately suffered a similar fate as the Yugoslav one due to the lack of funding.
Andreev's approach was in a sense "external". The modelling would be statistical, but its purpose would not be to mimic the stochasticity of the human thought process, but rather to produce a working machine translation system. Kulagina and Melchuk disagreed with this approach as they thought that more of what is presently called "philosophical logic" was needed to model the human thought process at the symbolic level, and according to them, the formalization of the human thought process was a prerequisite for developing a machine translation system (cf. BIBREF6). We could speculate that sub-symbolic processing would have been acceptable too, since that approach is also rooted in philosophical logic as a way of formalizing human cognitive functions and is also "internal" in the same sense symbolic approaches are.
There were many other centers for research in machine translation: Gorkovsky University (Omsk), 1st Moscow Institute for Foreign Languages, Computing Centre of the Armenian SSR and at the Institute for Automatics and Telemechanics of the Georgian SSR BIBREF7. It is worthwhile to note that both the USA and the USSR had access to state-of-the-art computers, and the political support for the production of such systems meant that computers were made available to researchers in machine translation. However, the results were poor in the late 1950s, and a working system was yet to be shown. All work was therefore theoretical work implemented on a computer, which proved to be sub-optimal.
In modern times, the idea of machine translation was put forth by the philosopher and logician Yehoshua Bar-Hillel (most notably in [3] and [4]), whose papers were studied by the Croatian group.
|
Who put forth the idea of machine translation in modern times?
|
In modern times, the idea of machine translation was put forth by the philosopher and logician Yehoshua Bar-Hillel.
|
null | false
| null |
Mattim Birta is a town and Village Development Committee in Khotang District in the Sagarmatha Zone of eastern Nepal. At the time of the 1991 Nepal census it had a population of 5,021 persons residing in 899 individual households.
|
What's the population of Mattim Birta?
|
As of the 1991 Nepal census, the population of Mattim Birta was 5,021 people.
|
null | false
| null |
Tell me another joke
|
Well, if you didn't like the last joke, you won't like this either but here goes...
A pony with a cough is just a little hoarse
|
|
1909.02635
| false
| null |
One way to analyze the model is to compute model gradients with respect to input features BIBREF26, BIBREF25. Figure FIGREF37 shows that in this particular example, the most important model inputs are verbs possibly associated with the entity butter, in addition to the entity's mentions themselves. It further shows that the model learns to extract shallow clues of identifying actions exerted upon only the entity being tracked, regardless of other entities, by leveraging verb semantics.
One way to analyze the model is to compute model gradients with respect to input features BIBREF26, BIBREF25. Figure FIGREF37 shows that in this particular example, the most important model inputs are verbs possibly associated with the entity butter, in addition to the entity's mentions themselves. It further shows that the model learns to extract shallow clues of identifying actions exerted upon only the entity being tracked, regardless of other entities, by leveraging verb semantics.
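A minimal sketch of this gradient-based attribution, assuming PyTorch; the toy model, embedding dimensions, and the choice of gradient norm as the importance score are illustrative assumptions, not the paper's exact analysis.

```python
# Sketch of gradient-based input attribution: the saliency of each input
# token is the gradient magnitude of the model output w.r.t. its embedding.
import torch

def input_saliency(model, embeddings: torch.Tensor) -> torch.Tensor:
    embeddings = embeddings.clone().requires_grad_(True)
    score = model(embeddings).sum()   # scalar output to differentiate
    score.backward()
    # One importance value per token: gradient norm over embedding dims.
    return embeddings.grad.norm(dim=-1)

# Example with a toy linear "model" over token embeddings.
toy_model = torch.nn.Sequential(torch.nn.Linear(8, 1))
tokens = torch.randn(1, 5, 8)        # 1 sentence, 5 tokens, 8-dim embeddings
print(input_saliency(toy_model, tokens))  # shape: (1, 5)
```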
|
What evidence do they present that the model attends to shallow context clues?
|
Using model gradients with respect to input features, they show that the most important model inputs are verbs associated with the tracked entity, which indicates that the model attends to shallow context clues.
|
null | false
| null |
Building material is material used for construction. Many naturally occurring substances, such as clay, rocks, sand, wood, and even twigs and leaves, have been used to construct buildings. Apart from naturally occurring materials, many man-made products are in use, some more and some less synthetic. The manufacturing of building materials is an established industry in many countries and the use of these materials is typically segmented into specific specialty trades, such as carpentry, insulation, plumbing, and roofing work. They provide the make-up of habitats and structures including homes.
|
What are some of the natural materials used in construction?
|
Clay, rocks, sand and wood are some of the common natural materials used in construction.
|
1908.10275
| false
| null |
How has the musical taste of popular music audiences changed over the last century? The trend lines of the MUSIC model features, reported in figure FIGREF12, reveal that audiences wanted products more and more contemporary, intense and a little bit novel or sophisticated, but less and less mellow and (surprisingly) unpretentious. In other words, the audiences of popular music are getting more demanding as the quality and variety of music products increase.
How has the musical taste of popular music audiences changed over the last century? The trend lines of the MUSIC model features, reported in figure FIGREF12, reveal that audiences wanted products more and more contemporary, intense and a little bit novel or sophisticated, but less and less mellow and (surprisingly) unpretentious. In other words, the audiences of popular music are getting more demanding as the quality and variety of music products increase.
|
What trends are found in musical preferences?
|
The answers are shown as follows:
* audiences wanted products more and more contemporary, intense and a little bit novel or sophisticated, but less and less mellow and (surprisingly) unpretentious
|
null | false
| 30
|
We used two parallel corpora: Asian Scientific Paper Excerpt Corpus (ASPEC) BIBREF0 and NTCIR PatentMT Parallel Corpus BIBREF1 . Regarding the training data of ASPEC, we used only the first 1 million sentences sorted by sentence-alignment similarity. Japanese sentences were segmented by the morphological analyzer MeCab (version 0.996, IPADIC), and English sentences were tokenized by tokenizer.perl of Moses. Table TABREF14 shows the numbers of the sentences in each corpus. Note that sentences with more than 40 words were excluded from the training data.
We used two parallel corpora: Asian Scientific Paper Excerpt Corpus (ASPEC) (Nakazawa et al., 2016) and NTCIR PatentMT Parallel Corpus (Goto et al., 2013).
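A minimal sketch of the length filter described above, which drops training pairs where a sentence exceeds 40 words; the toy data and the simple whitespace word count are illustrative assumptions.

```python
# Sketch of filtering parallel sentence pairs by length; pairs with more
# than MAX_LEN words on either side are removed, as in the setup above.
MAX_LEN = 40

def filter_pairs(pairs):
    return [(src, tgt) for src, tgt in pairs
            if len(src.split()) <= MAX_LEN and len(tgt.split()) <= MAX_LEN]

pairs = [("short source sentence", "short target sentence"),
         ("w " * 50, "long source dropped")]  # second pair has 50 words
print(len(filter_pairs(pairs)))  # -> 1
```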
|
Which two parallel corpora does the author use?
|
Asian Scientific Paper Excerpt Corpus (ASPEC) and NTCIR PatentMT Parallel Corpus.
|
null | false
| null |
How do airplanes fly?
|
Planes fly because of two forces - thrust and lift.
Thrust is generated by the engines and moves the plane forward.
As air flows past the wings, their shape creates a pressure differential that generates lift.
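For a rough sense of the magnitudes involved, the standard lift equation L = 0.5 * rho * v^2 * S * C_L with assumed values for a small aircraft; the numbers below are illustrative, not exact figures.

```python
# Illustrative lift calculation, L = 0.5 * rho * v^2 * S * C_L; the values
# below are rough assumptions for a small aircraft.
rho = 1.225   # air density at sea level, kg/m^3
v = 60.0      # airspeed, m/s
S = 16.0      # wing area, m^2
C_L = 0.8     # lift coefficient (depends on wing shape and angle of attack)

lift_newtons = 0.5 * rho * v**2 * S * C_L
print(f"Lift ~ {lift_newtons / 9.81:.0f} kgf")  # weight the wings can support
```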
|
|
null | false
| null |
How can you tell the difference between a dromedary and a camel?
|
The most notable difference is the number of humps: a dromedary has one hump, while a Bactrian camel has two.
|
|
null | false
| null |
List the three first books written by Dan Brown, in chronological order and bullet point, using the format {book title} ({year published})
|
- Digital Fortress (1998)
- Angels & Demons (2000)
- Deception Point (2001)
|
|
null | false
| null |
Categorize these movies that came out in 1999 as cartoons or live-action: The Matrix, Stuart Little, Cruel Intentions, The Iron Giant, The Blair Witch Project, South Park: Bigger, Longer, and Uncut, The Sixth Sense
|
Cartoons: The Iron Giant, South Park: Bigger, Longer, and Uncut
Live action: The Matrix, Stuart Little, Cruel Intentions, The Blair Witch Project, The Sixth Sense
|
|
1907.00758
| true
| null |
Given a pair of ultrasound and audio segments we can calculate the distance between them using our model. To predict the synchronisation offset for an utterance, we consider a discretised set of candidate offsets, calculate the average distance for each across utterance segments, and select the one with the minimum average distance. The candidate set is independent of the model, and is chosen based on task knowledge (Section SECREF5 ).
Given a pair of ultrasound and audio segments we can calculate the distance between them using our model. To predict the synchronisation offset for an utterance, we consider a discretised set of candidate offsets, calculate the average distance for each across utterance segments, and select the one with the minimum average distance.
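A minimal sketch of this selection step: average the model's distance over an utterance's segments for each candidate offset and take the minimum. Here `distance` stands in for the trained model and the candidate offsets are assumptions, not the paper's actual candidate set.

```python
# Sketch of offset selection by minimum average distance across segments.
def predict_offset(segments, candidate_offsets, distance):
    best_offset, best_avg = None, float("inf")
    for offset in candidate_offsets:
        # Score every (ultrasound, audio) segment pair at this offset.
        avg = sum(distance(u, a, offset) for u, a in segments) / len(segments)
        if avg < best_avg:
            best_offset, best_avg = offset, avg
    return best_offset

# Toy usage: the true offset (here 45 ms) minimises a quadratic "distance".
segs = [(None, None)] * 4
print(predict_offset(segs, range(0, 181, 45),
                     lambda u, a, off: (off - 45) ** 2))  # -> 45
```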
|
Does their neural network predict a single offset in a recording?
|
Yes.
|
null | false
| null |
The participation of Thailand in the ABU TV Song Festival has occurred twice since the inaugural ABU TV Song Festival began in 2012. Since their début in 2013, the Thai entry has been organised by the national broadcaster Thai Public Broadcasting Service (TPBS). In 2015, Thailand withdrew from the festival.
|
Has Thailand been part in the ABU TV Song Festival
|
Indeed, Thailand has been participating in the ABU TV Song Festival. It has occurred twice since the inaugural ABU TV Song Festival began in 2012. Since their début in 2013, the Thai entry has been organised by the national broadcaster Thai Public Broadcasting Service (TPBS). In 2015, Thailand withdrew from the festival.
|
null | false
| 18
|
Training Data. We choose WIT3's TED corpus BIBREF12 as the basis of our experiments since it might be the only high-quality parallel data for many low-resourced language pairs. TED is also multilingual in the sense that it includes a number of talks which are commonly translated into many languages. In addition, we use a much larger corpus provided freely by WMT organizers when we evaluate the impact of our approach in a real machine translation campaign. It includes the parallel corpus extracted from the digital corpus of the European Parliament (EPPS), the News Commentary (NC) and the web-crawled parallel data (CommonCrawl). While the number of sentences in popular TED corpora varies from 13 thousand to 17 thousand, the total number of sentences in that larger corpus is approximately 3 million.
Neural Machine Translation Setup. All experiments have been conducted using the NMT framework Nematus. Following the work of Sennrich2016a, subword segmentation is handled in the preprocessing phase using Byte-Pair Encoding (BPE). Except where stated clearly in some experiments, we set the number of BPE merge operations at 39500 on the joint source and target data. When training all NMT systems, we remove sentence pairs exceeding 50 words in length and shuffle them inside every minibatch. Our short-list vocabularies contain the 40,000 most frequent words, while the others are considered rare words and handled by subword translation. We use a 1024-cell GRU layer and 1000-dimensional embeddings, with dropout at every layer with probability 0.2 in the embedding and hidden layers and 0.1 in the input and output layers. We trained our systems using gradient descent optimization with Adadelta BIBREF13 on minibatches of size 80, and the gradient is rescaled whenever its norm exceeds 1.0. All training runs last approximately seven days if the early-stopping condition is not reached. At certain intervals, an external evaluation script computing BLEU BIBREF14 is run on a development set to decide the early-stopping condition. This evaluation script is also used to choose the model achieving the best BLEU on the development set, instead of the maximal log-likelihood between the translations and target sentences during training. In translation, the framework produces INLINEFORM0-best candidates and we then use a beam search with a beam size of 12 to get the best translation.
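A minimal sketch of how BPE merge operations are learned; real systems such as subword-nmt follow the same scheme at a much larger scale (e.g., the 39500 merges above). The toy word frequencies are illustrative assumptions.

```python
# Minimal sketch of learning BPE merge operations for subword segmentation.
from collections import Counter

def learn_bpe(word_freqs: dict, num_merges: int) -> list:
    # Represent each word as a tuple of symbols (characters initially).
    vocab = {tuple(w): f for w, f in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, freq in vocab.items():
            for pair in zip(symbols, symbols[1:]):
                pairs[pair] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent adjacent pair
        merges.append(best)
        # Replace every occurrence of the best pair with a merged symbol.
        new_vocab = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges

print(learn_bpe({"low": 5, "lower": 2, "lowest": 3}, 3))
# [('l', 'o'), ('lo', 'w'), ('low', 'e')]
```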
We choose WIT3's TED corpus (Cettolo et al., 2012) as the basis of our experiments since it might be the only high-quality parallel data for many low-resourced language pairs. TED is also multilingual in the sense that it includes a number of talks which are commonly translated into many languages. In addition, we use a much larger corpus provided freely by WMT organizers when we evaluate the impact of our approach in a real machine translation campaign. It includes the parallel corpus extracted from the digital corpus of the European Parliament (EPPS), the News Commentary (NC) and the web-crawled parallel data (CommonCrawl).
|
What corpus is used by them?
|
WIT3's TED corpus and a corpus that includes the parallel corpus extracted from the digital corpus of the European Parliament (EPPS), the News Commentary (NC) and the web-crawled parallel data (CommonCrawl) provided freely by WMT organizers.
|
1907.04380
| true
| null |
Natural Language Inference (NLI) is often used to gauge a model's ability to understand a relationship between two texts BIBREF0 , BIBREF1 . In NLI, a model is tasked with determining whether a hypothesis (a woman is sleeping) would likely be inferred from a premise (a woman is talking on the phone). The development of new large-scale datasets has led to a flurry of various neural network architectures for solving NLI. However, recent work has found that many NLI datasets contain biases, or annotation artifacts, i.e., features present in hypotheses that enable models to perform surprisingly well using only the hypothesis, without learning the relationship between two texts BIBREF2 , BIBREF3 , BIBREF4 . For instance, in some datasets, negation words like “not” and “nobody” are often associated with a relationship of contradiction. As a ramification of such biases, models may not generalize well to other datasets that contain different or no such biases.
However, recent work has found that many NLI datasets contain biases, or annotation artifacts, i.e., features present in hypotheses that enable models to perform surprisingly well using only the hypothesis, without learning the relationship between two texts BIBREF2 , BIBREF3 , BIBREF4 . For instance, in some datasets, negation words like “not” and “nobody” are often associated with a relationship of contradiction. As a ramification of such biases, models may not generalize well to other datasets that contain different or no such biases.
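A minimal sketch of how such a hypothesis-only artifact can be quantified: count gold labels among hypotheses containing negation words and see whether the distribution is skewed. The tiny dataset and negation list are made up for illustration.

```python
# Sketch quantifying a hypothesis-only artifact: how label frequencies shift
# when the hypothesis contains a negation word; the toy data is illustrative.
from collections import Counter

NEGATIONS = {"not", "no", "nobody", "never", "nothing"}

examples = [  # (hypothesis, gold label) -- made-up toy data
    ("a woman is sleeping", "contradiction"),
    ("nobody is talking", "contradiction"),
    ("the man is not outside", "contradiction"),
    ("a person is outdoors", "entailment"),
]

with_neg = Counter(label for hyp, label in examples
                   if NEGATIONS & set(hyp.split()))
print(with_neg)  # Counter({'contradiction': 2}) on this toy set
```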
|
Is such bias caused by bad annotation?
|
No.
|
null | false
| null |
What high school did Washington Governer Jay Inslee attend?
|
Ingraham High School
|