| paper_id | yes_no | paper_index | evidence | question | answer |
|---|---|---|---|---|---|
null | false | 331 | With the widespread adoption of electronic health records (EHRs) in recent years, a large number of EHRs can be integrated and shared across different medical environments, which further supports clinical decision making and government health policy formulation BIBREF0. However, most of the information in current medical records is stored as natural language text, which data mining algorithms cannot process directly. To extract relational entity triples from the text, researchers generally use entity and relation extraction algorithms, and rely on the central word to convert the triples into key-value pairs, which conventional data mining algorithms can process directly. Fig. FIGREF1 shows an example of entity and relation extraction in the text of EHRs. The text contains three relational entity triples, i.e., $<$咳嗽, 程度等级, 反复$>$ ($<$cough, degree, repeated$>$), $<$咳痰, 程度等级, 反复$>$ ($<$expectoration, degree, repeated$>$) and $<$发热, 存在情况, 无$>$ ($<$fever, presence, nonexistent$>$). By using the symptom as the central word, these triples can then be converted into three key-value pairs, i.e., $<$咳嗽的程度等级, 反复$>$ ($<$degree of cough, repeated$>$), $<$咳痰的程度等级, 反复$>$ ($<$degree of expectoration, repeated$>$) and $<$发热的存在情况, 无$>$ ($<$presence of fever, nonexistent$>$).
To solve the task of entity and relation extraction, researchers usually follow a pipeline approach and split the task into two sub-tasks, namely named entity recognition (NER) BIBREF1 and relation classification (RC) BIBREF2.
However, this pipeline method usually fails to capture the joint features between entity and relation types. For example, for a valid relation “存在情况(presence)” in Fig. FIGREF1, the types of its two relational entities must be “疾病(disease)”, “症状(symptom)” or “存在词(presence word)”. To capture these joint features, a large number of joint learning models have been proposed BIBREF3, BIBREF4, among which bidirectional long short-term memory (Bi-LSTM) BIBREF5, BIBREF6 is commonly used as the shared parameter layer. However, compared with language models, which benefit from abundant pre-training knowledge and strong feature extraction capability, the Bi-LSTM model has relatively lower generalization performance.
To improve the performance, a simple solution is to incorporate a language model into joint learning as a shared parameter layer. However, existing models only introduce language models into the NER or RC task separately BIBREF7, BIBREF8, so the joint features between entity and relation types still cannot be captured. Meanwhile, BIBREF9 considered the joint features, but it also uses Bi-LSTM as the shared parameter layer, resulting in the same problem as discussed previously.
Given the aforementioned challenges and current research, we propose a focused attention model based on the widely known BERT language model BIBREF10 to jointly perform the NER and RC tasks. Specifically, through a dynamic range attention mechanism, we construct task-specific MASK matrices to control the attention range of the last $K$ layers of the BERT language model, leading the model to focus on the words relevant to the task. This process yields the corresponding task-specific context-dependent representations. In this way, the modified BERT language model can be used as the shared parameter layer when jointly learning the NER and RC tasks. We call the modified BERT language model the shared task representation encoder (STR-encoder) in the remainder of this paper.
The main contributions of our work are summarized as follows:
We propose a focused attention model to jointly learn the NER and RC tasks. The model integrates the BERT language model as a shared parameter layer to achieve better generalization performance.
In the proposed model, we incorporate a novel structure, called the STR-encoder, which changes the attention range of the last $K$ layers of the BERT language model to obtain task-specific context-dependent representations. It makes full use of the original structure of BERT to produce task-specific vectors, and can directly use the prior knowledge contained in the pre-trained language model.
For the RC task, we propose two different MASK matrices to extract the required feature representations. The performance of these two matrices is analyzed and compared in the experiments.
The rest of the paper is organized as follows. We briefly review the related work on NER, RC and joint entity and relation extraction in Section SECREF2. In Section SECREF3, we present the proposed focused attention model. We report the experimental results in Section SECREF4. Section SECREF5 is dedicated to studying several key factors that affect the performance of our model. Finally, conclusion and future work are given in Section SECREF6.
In the proposed model, we incorporate a novel structure, called the STR-encoder, which changes the attention range of the last $K$ layers of the BERT language model to obtain task-specific context-dependent representations. It makes full use of the original structure of BERT to produce task-specific vectors, and can directly use the prior knowledge contained in the pre-trained language model. | What novel structure do they propose? | The STR-encoder, which changes the attention range of the last $K$ layers of the BERT language model to obtain task-specific context-dependent representations. |
null | false | null | Tell me which of the following are NBA teams: Knicks, Thunder, Kings, Lizards, Sparrows, Hawks, Warriors, Gladiators. | Knicks, Thunder, Kings, Hawks, Warriors. | |
null | false | 471 | Recent advances in self-supervised learning (SSL) demonstrate competitive or even better transfer learning performance on various downstream tasks, compared with supervised pre-training. Although it waives the cost of human labeling, SSL usually requires massive unlabeled data to learn a powerful representation model and benefits from significantly large-scale pre-training data; e.g., some works adopted billion-scale data to pre-train better SSL models. The common pre-training practice follows the joint training (JT) setup, where massive unlabeled data are collected together before model training. In reality, however, it is usually difficult to access a large amount of collective unlabeled data at once. Instead, real-world data are usually accessed in a streaming fashion, e.g., data are generated and collected sequentially chunk by chunk, or even decentralized and stored on different servers; such a learning setup is known as sequential training (ST). Despite much research effort and promising results achieved by JT, it inevitably suffers from heavy data storage and prolonged training time, and struggles when the training data volume expands over time. For ST, on the other hand, a learner can be sequentially trained with disjoint data chunks, making it much more efficient than JT.
How to effectively and efficiently pre-train a representation model under the ST setup has been an open problem. Despite the high efficiency, some continual learning research has shown that ST with supervised models tends to suffer from catastrophic forgetting, with a significant performance degradation on the historical data chunks. Unlike continual learning tasks, in pre-training tasks one expects the model to generalize well to downstream tasks rather than focusing only on the seen tasks. Nevertheless, how well sequential self-supervised models perform on downstream tasks remains unclear. WordNet is used to measure the semantic similarity of classes: classes having the same parent or ancestor in WordNet share similar semantics in the class semantic tree.
To fill the research gap, we provide a thorough empirical study on self-supervised pre-training with streaming data. In the pre-training stage, to mimic real-world data collection scenarios and for the better dissection of sequential SSL, we consider streaming data with different degrees of distribution shifts. As shown in Figure 1, we obtain four types of streaming data: the instance incremental sequence with negligible data distribution shifts, obtained by randomly splitting ImageNet-1K into four independent and identically distributed (IID) data chunks; the random class incremental sequence with moderate data distribution shifts, obtained by randomly splitting the 1K classes of images into four disjoint chunks of 250 classes each; the distant class incremental sequence with severe data distribution shifts, obtained by splitting the 1K classes of data into four chunks while maximizing the semantic dissimilarity among chunks; and the domain incremental sequence with severe domain distribution shifts, obtained by taking each domain in DomainNet as a data chunk.
As for the evaluation, we consider three downstream tasks: few-shot evaluation and linear evaluation (also named many-shot classification) on 12 image classification datasets, and the Pascal VOC detection task. Through extensive experiments with more than 500 pre-trained models, we thoroughly investigate the key factors in sequential SSL, including streaming data, downstream tasks and datasets, continual learning methods, SSL methods, and method efficiency in terms of time and storage. We also thoroughly investigate the knowledge-forgetting behavior of sequential SSL and supervised learning (SL) models and provide a comprehensive empirical analysis of the underlying reason.
To the best of our knowledge, we are among the first to explore the sequential self-supervised pre-training setting and the first to provide a thorough empirical study on self-supervised pre-training with streaming data. We summarize the takeaways as well as our contributions as follows: i) Sequential SSL models exhibit on-par transfer learning performance with joint SSL models on streaming data with negligible or mild distribution shifts. As for streaming data with severe distribution shifts or longer sequences, i.e., the distant class incremental sequence, evident performance gaps exist between sequential SSL and joint SSL models. Such performance gaps, however, can be mitigated effectively and efficiently with unsupervised parameter regularization and simple data replay. ii) Based on the above finding, we conclude that the standard joint training paradigm may be unnecessary for SSL pre-training. Instead, sequential SSL is performance-competitive but more time-efficient and storage-saving, and is well worth considering as a practical paradigm for self-supervised pre-training with streaming data. iii) Compared with supervised learning (SL) models, SSL models consistently show smaller performance gaps between ST and JT. Our comprehensive investigation of the learned representations demonstrates that sequential SSL models are less prone to catastrophic forgetting than SL models. iv) Through an empirical analysis of the sharpness of minima in the loss landscape, we find that SSL models have wider minima than SL models, which we hypothesize is the reason for the reduced forgetting of SSL models.
In the pre-training stage, to mimic real-world data collection scenarios and for the better dissection of sequential SSL, we consider streaming data with different degrees of distribution shifts. As shown in Figure 1, we obtain four types of streaming data, including the instance incremental sequence with negligible data distribution shifts, by randomly splitting ImageNet-1K (Russakovsky et al., 2015) into four independent and identically distributed (IID) data chunks, the random class incremental sequence with moderate data distribution shifts, by randomly splitting 1K classes of images into four disjoint chunks each with 250 classes, the distant class incremental sequence with severe data distribution shifts, by splitting 1K classes of data into four chunks while maximizing the semantical dissimilarity among chunks, and the domain incremental sequence with severe domain distribution shifts, by taking each domain in DomainNet (Peng et al., 2019) as a data chunk. | Please elaborate on the rationale behind the design of the four scenarios, as early as in the introduction. In other words, why these four categories but nothing in-between or beyond? | We have added more introduction to the rationale behind the proposed streaming data in the introduction. We would like to clarify that we have two main considerations behind the proposed four types of streaming data.
The main consideration is to mimic common practical data collection scenarios, where streaming data may include different levels of data distribution shift. To this end, we proposed four types of streaming data targeting four degrees of commonly seen distribution shift in realistic scenarios.
The second consideration is the ease of a comprehensive empirical study on pre-training with streaming data. As the first work to provide such a systematic study, we believe using streaming data with various degrees of distribution shifts is beneficial for the community to gain a comprehensive understanding of the sequential pre-training problem. Streaming data with other complex distribution shifts may be feasible; e.g., mixing different distribution shifts within one streaming data sequence may be more realistic. We call for more attention to pre-training with streaming data and for further extending the types of data sequences considered. For our work, however, a study on streaming data containing mixed distribution shifts would make it difficult to probe the properties of the studied problem. Therefore, we adopt the design of the four scenarios in our paper. |
null | false | null | Give me a list of the best breakfast foods I've never tried | French toast, pancakes, avocado toast, corned beef hash, eggs Benedict, grilled blueberry muffin | |
null | false | null | Where is Khargone? | Khargone is a town in Madhya Pradesh, India, situated near the river Kunda. It is in the southwest corner of the state and borders Maharashtra state to the south. Khargone is also a district with the same name. The weather in Khargone is very hot in summer, with temperatures reaching above 40 degrees Celsius. The city has a famous temple known as "Nav Grah Temple", which means nine planets. It also has a fort. | |
1909.01013 | false | null | We train $\mathcal {F}$ and $\mathcal {G}$ jointly and introduce two regularizers. Formally, we hope that $\mathcal {G}(\mathcal {F}(X))$ is similar to $X$ and $\mathcal {F}(\mathcal {G}(Y))$ is similar to $Y$. We implement this constraint as a cycle consistency loss. As a result, the proposed model has two learning objectives: i) an adversarial loss ($\ell _{adv}$) for each model as in the baseline. ii) a cycle consistency loss ($\ell _{cycle}$) on each side to avoid $\mathcal {F}$ and $\mathcal {G}$ from contradicting each other. The overall architecture of our model is illustrated in Figure FIGREF4.
We train $\mathcal {F}$ and $\mathcal {G}$ jointly and introduce two regularizers. Formally, we hope that $\mathcal {G}(\mathcal {F}(X))$ is similar to $X$ and $\mathcal {F}(\mathcal {G}(Y))$ is similar to $Y$. We implement this constraint as a cycle consistency loss. As a result, the proposed model has two learning objectives: i) an adversarial loss ($\ell _{adv}$) for each model as in the baseline. ii) a cycle consistency loss ($\ell _{cycle}$) on each side to avoid $\mathcal {F}$ and $\mathcal {G}$ from contradicting each other. | What regularizers were used to encourage consistency in back translation cycles? | The answers are shown as follows:
* an adversarial loss ($\ell _{adv}$) for each model as in the baseline
* a cycle consistency loss ($\ell _{cycle}$) on each side
|
null | false | null | Elie Wiesel (/ˈɛli viːˈzɛl/, born Eliezer Wiesel, Yiddish: אליעזר װיזעל Eliezer Vizel; September 30, 1928 – July 2, 2016) was a Romanian-born American writer, professor, political activist, Nobel laureate, and Holocaust survivor. | Extract the occupations of Elie Wiesel from the text. | Elie Wiesel's listed professions were: writer, professor, and political activist. |
null | false | null | Create content for an AirBnB guest pamphlet for a house that is on the beach in Florida. It should include the property address, host contact information, wi-fi network and password, check-in/out instructions, practical information about the property (e.g., garbage & recycling days), and the best restaurants, bars and activities near the house. | About The Property: Welcome to Sunset Beach on Treasure Island, we’re so happy to host you! Steps from the beautiful beach, we hope that you will enjoy your time here. Make yourselves comfortable!
Address: 8085 W Gulf Blvd, Treasure Island, FL 33706
Host Contact Information:
- Name: Jane Carter
- Email: Jane.Carter8085@gmail.com
- Phone: 727-326-0913
WiFi:
- Network: SaltLife8085
- Password: SandyBeachesPlz!
Check-In:
Check-in is at 3:00pm EST on the day of your arrival. The house has keyless entry; the code is 30456. The following are instructions to unlock and lock the door:
- To Unlock:
- From the outside, press the Home logo and enter 30456.
- From the inside, turn the thumbturn.
- To Lock:
- From the outside, press the lock logo.
- From the inside, turn the thumbturn.
If needed, there is a front door key available. It is the purple coded key hanging on the key ring in the entry.
Check-out:
Check out is at 11:00am EST on the day of your departure. The following is a checklist for departure:
- Turn off all lights
- Close all windows
- Place any keys on the key ring at the entry & lock the front door
PRACTICAL INFORMATION
House Rules
- No smoking is allowed.
- No pets.
Garbage & Recycling
The garbage & recycling bins are located on the west side of the house (side of the garage). On collection days, bring bins to the curb.
- Garbage is collected on Tuesdays and Fridays.
- Recycling is collected on Fridays.
Kitchen items
- To start the dishwasher, click “Normal Wash”, then click “Start” and shut the dishwasher door. Please note that the dishwasher detergent door does not always fully close, just place the tablet in the appropriate spot and start the cycle.
- The beige large pan and pot are the “Always Pan” and must be hand washed.
- The refrigerator has filtered water, inside the left hand door there is a small button/spout.
- There are additional kitchen appliances and bakeware in the cabinet by the dining table and the closet by the second set of stairs.
- Under the powder room bathroom sink there are additional paper towels and cleaning products.
Beds & Linens/Towels
- The guest bedroom daybed turns into a king-sized bed. The pillow topper and bedding are located in the guest closet.
- The sofa in the living room pulls out to a queen bed. A memory foam mattress topper, linens, and pillows are located on the top shelf of the laundry closet upstairs.
- There are extra linens and towels in the gray baskets in the laundry closets.
Beach Details
- Beach chairs, tents, toys, floats, a buggy, etc. are available in the carport closet (double doors). Although we keep this closet unlocked, a closet key (coded blue) is available on the key ring by the front door if you wish to lock.
- Two paddle boards are available for your use. One is already inflated (located on the back side of the house). The other is in the black bag in the beach closet (pump included).
- Beach towels are available in the blue console table on the main floor landing. Additional beach towels are upstairs in a gray basket inside the laundry closet.
- When returning from the beach, there is an outdoor shower on the west side of the house (along the garage), a hose on the left side of the front door, and a shoe brush on the right side of the front door at your disposal. Please be sure to turn the water off after you’re done using the shower or the hose.
RESTAURANTS, BARS, AND ACTIVITIES
Places to Eat and Drink
Coffee
- Dunkin’ Donuts (~5 minute drive)
- Grove Surf & Coffee Shop (~5 minute drive) - Boutique coffee shop with swimwear & clothing store attached
- The Blend Coffee & Cocktails (<15 minute drive) - St. Pete coffee chain with interesting flavors and blends
Restaurants
- Sea Dog Brewing Company (<5 minute drive) - American
- Shrimpy’s Waterfront (<5 minute drive) - Cajun / Creole dishes & local seafood
- Buoy’s Waterfront Bar & Grill (~5 minute drive) - Laid-back waterside eatery
- Vito & Michael’s Pizzeria (~5 minute drive) - Italian
- Captain Bill’s Oyster Bar (<10 minute drive) - Oysters & seafood
- VIPs (<10 minute drive) - Mexican
- Ricky T’s Bar & Grille (<10 minute drive) - Relaxed American food with live music
- Pia’s (<20 minute drive) - Authentic family Italian (Highly recommended! Make a reservation ahead of time.)
Bars
- Caddy’s (15 minute walk) - Day time beach bar, live music at night
- Ka’Tiki (15 minute walk) - Beach bar with live music every night
- Toasted Monkey (~5 minute drive) - Waterfront with live music
- Riptides (<5 minute drive) - Sports bar
- Mastry’s Brewing Co (<5 minute drive) - Brewery with Thursday night trivia
Activities
A great local resource for activities and restaurants is the blog “I love the Burg”
[link: https://ilovetheburg.com/]. Along with regular content featuring local business, every Friday they highlight “Top Events this Weekend in St. Pete”. There are the markets, events, restaurant openings, etc. that are happening in the St Pete area.
- Walk the Sunset Beach boardwalk - just around the corner enjoy a 1 mile stroll.
- Fun indoors - help yourself to all the books and games in the blue bookcase
- Rent a boat - enjoy Sunset Beach from the gulf. Treasure Island Boat Rentals (~5 minute drive) has different vessels available for ½ day or full day.
- Visit Gulfport (~15 minute drive) - quaint artsy town nearby with restaurants/bars, shops, playground and a waterfront.
- Explore DTSP - there are several museums, restaurants, and walking paths within a 30 minute drive.
- St Pete Pier - along the water in DTSP with restaurants, shops, and a playground.
- Sunken Gardens - botanical gardens covering 4 acres of land.
- Great Explorations Children’s Museum
- Salvador Dali Museum
- Visit a farmers market
- St Petersburg Market (25 min drive) - every Saturday 9am - 2pm
- Corey Ave (~5 min drive) - every Sunday 10am - 2pm
- Gulfport Market (~15 minute drive) - every Tuesday 9am-3pm | |
null | false | null | Through this period, YouTube tried several new ways to generate revenue beyond advertisements. In 2013, YouTube launched a pilot program for content providers to offer premium, subscription-based channels. This effort was discontinued in January 2018 and relaunched in June, with US$4.99 channel subscriptions. These channel subscriptions complemented the existing Super Chat ability, launched in 2017, which allows viewers to donate between $1 and $500 to have their comment highlighted. In 2014, YouTube announced a subscription service known as "Music Key," which bundled ad-free streaming of music content on YouTube with the existing Google Play Music service. The service continued to evolve in 2015 when YouTube announced YouTube Red, a new premium service that would offer ad-free access to all content on the platform (succeeding the Music Key service released the previous year), premium original series, and films produced by YouTube personalities, as well as background playback of content on mobile devices. YouTube also released YouTube Music, a third app oriented towards streaming and discovering the music content hosted on the YouTube platform.
The company also attempted to create products appealing to specific viewers. YouTube released a mobile app known as YouTube Kids in 2015, designed to provide an experience optimized for children. It features a simplified user interface, curated selections of channels featuring age-appropriate content, and parental control features. Also in 2015, YouTube launched YouTube Gaming—a video gaming-oriented vertical and app for videos and live streaming, intended to compete with the Amazon.com-owned Twitch.
The company was attacked on April 3, 2018, when a shooting occurred at YouTube's headquarters in San Bruno, California, which wounded four and resulted in one death (the shooter). | Why might YouTube employees be sensitive about guns? | On April 3, 2018, a shooting occurred at YouTube's headquarters in San Bruno, California, which wounded four and resulted in one death (the shooter). Also, depending on the political affiliation of the employees, they may be sensitive about topics regarding gun control and safety. |
null | false | null | Who is best, Alexa or Siri? | Alexa has more power, as it can be connected to most third-party devices, whereas Siri works within its own ecosystem. | |
null | false | null | Among rock musicians of the late 20th century, tell me which of the following are singers or drummers: Bon Scott, Tommy Lee, Sebastian Bach, Robert Mason, Phil Rudd, Alex Van Halen, Mick Brown, Klaus Meine, David Lee Roth, Rob Halford | Bon Scott - Singer
Tommy Lee - Drummer
Sebastian Bach - Singer
Robert Mason - Singer
Phil Rudd - Drummer
Alex Van Halen - Drummer
Mick Brown - Drummer
Klaus Meine - Singer
David Lee Roth - Singer
Rob Halford - Singer | |
null | false | null | Are there any movies about Atlantis? | There are movies about Atlantis:
- Atlantis: End of a World, Birth of a Legend 2011
- Atlantis 2013–2015 TV series | |
null | false | null | He has been a leading film critic in the country for over a quarter of a century. | Change this to future tense | He will be a leading film critic in the country for over a quarter of a century. |
null | false | 309 | In order to perform a detailed analysis of Zhihu Live with a data-driven approach, the first step is to collect Zhihu Live data. Since there is no public dataset available for research and no official APIs, we develop a web spider with the Python requests library to crawl data from the Zhihu Live official website. Our crawling strategy is breadth-first traversal (we crawl the records one by one from the given URLs, and then extract more detailed information from sub-URLs). We follow the crawler etiquette defined in Zhihu's robots.txt: we randomly pause 2 to 5 seconds after each crawl to avoid being banned by Zhihu and to avoid generating abnormal traffic. Our spider crawls 7242 records in total. The majority of the data are embedded in Ajax calls. In addition, we also crawl 286,938 comments on these Zhihu Lives. All of the datasets are stored in MongoDB, a widely-used NoSQL database.
All of the datasets are stored in MongoDB, a widely-used NoSQL database. | What database do they use to store their datasets? | MongoDB |
1705.07368 | false | null | Word embeddings have risen in popularity for NLP applications due to the success of models designed specifically for the big data setting. In particular, BIBREF0 , BIBREF1 showed that very simple word embedding models with high-dimensional representations can scale up to massive datasets, allowing them to outperform more sophisticated neural network language models which can process fewer documents. In this work, I offer a somewhat contrarian perspective to the currently prevailing trend of big data optimism, as exemplified by the work of BIBREF0 , BIBREF1 , BIBREF3 , and others, who argue that massive datasets are sufficient to allow language models to automatically resolve many challenging NLP tasks. Note that “big” datasets are not always available, particularly in computational social science NLP applications, where the data of interest are often not obtained from large scale sources such as the internet and social media, but from sources such as press releases BIBREF11 , academic journals BIBREF10 , books BIBREF12 , and transcripts of recorded speech BIBREF13 , BIBREF14 , BIBREF15 . A standard practice in the literature is to train word embedding models on a generic large corpus such as Wikipedia, and use the embeddings for NLP tasks on the target dataset, cf. BIBREF3 , BIBREF0 , BIBREF16 , BIBREF17 . However, as we shall see here, this standard practice might not always be effective, as the size of a dataset does not correspond to its degree of relevance for a particular analysis. Even very large corpora have idiosyncrasies that can make their embeddings invalid for other domains. For instance, suppose we would like to use word embeddings to analyze scientific articles on machine learning. In Table TABREF1 , I report the most similar words to the word “learning” based on word embedding models trained on two corpora. 
For embeddings trained on articles from the NIPS conference, the most similar words are related to machine learning, as desired, while for embeddings trained on the massive, generic Google News corpus, the most similar words relate to learning and teaching in the classroom. Evidently, domain-specific data can be important.
I have proposed a model-based method for training interpretable corpus-specific word embeddings for computational social science, using mixed membership representations, Metropolis-Hastings-Walker sampling, and NCE. Experimental results for prediction, supervised learning, and case studies on state of the Union addresses and NIPS articles, indicate that high-quality embeddings and topics can be obtained using the method. The results highlight the fact that big data is not always best, as domain-specific data can be very valuable, even when it is small. I plan to use this approach for substantive social science applications, and to address algorithmic bias and fairness issues.
Word embeddings have risen in popularity for NLP applications due to the success of models designed specifically for the big data setting.
Note that “big” datasets are not always available, particularly in computational social science NLP applications, where the data of interest are often not obtained from large scale sources such as the internet and social media, but from sources such as press releases BIBREF11 , academic journals BIBREF10 , books BIBREF12 , and transcripts of recorded speech BIBREF13 , BIBREF14 , BIBREF15
have proposed a model-based method for training interpretable corpus-specific word embeddings for computational social science, using mixed membership representations, Metropolis-Hastings-Walker sampling, and NCE. Experimental results for prediction, supervised learning, and case studies on state of the Union addresses and NIPS articles, indicate that high-quality embeddings and topics can be obtained using the method. The results highlight the fact that big data is not always best, as domain-specific data can be very valuable, even when it is small. | Why is big data not appropriate for this task? | Training embeddings from small-corpora can increase the performance of some tasks |
null | false | null | Give me a list of popular Netflix shows I can watch now. | There are currently a number of Netflix shows available that may be of interest to you depending on your favorite genres. Some popular ones include You, Ozark, Love is Blind, and Wednesday. | |
null | false | null | Who is the Cigarette Smoking Man? | The cigarette smoking man is a fictional character from the TV show "The X-Files." His name was revealed to be C.G.B. Spender in later seasons of the show. He was portrayed by William B. Davis. | |
null | false | null | Who is the founder of the Communist Party? | Lenin | |
1608.04917 | true | null | In the area of Economic and monetary system we see a strong cooperation between EPP and S&D (Fig FIGREF42 A), which is on a par with the cohesion of the most cohesive groups (GUE-NGL, S&D, Greens-EFA, ALDE, and EPP), and is above the cohesion of the other groups. As pointed out in the section “sec:coalitionpolicy”, there is a strong separation in two blocks between supporters and opponents of European integration, which is even more clearly observed in Fig FIGREF42 B. On one hand, we observe cooperation between S&D, ALDE, EPP, and ECR, and on the other, cooperation between GUE-NGL, Greens-EFA, EFDD, ENL, and NI. This division in blocks is seen again in Fig FIGREF42 C, which shows the strongest disagreements. Here, we observe two blocks composed of S&D, EPP, and ALDE on one hand, and GUE-NGL, EFDD, ENL, and NI on the other, which are in strong opposition to each other.
In the area of State and Evolution of the Union we again observe a strong division in two blocks (see Fig FIGREF42 E). This is different from the Economic and monetary system, however: there we observe far-left and far-right cooperation, whereas here the division is along the traditional left-right axis.
The patterns of coalitions forming on Twitter closely resemble those in the European Parliament. In Fig FIGREF42 J we see that the strongest degrees of cooperation on Twitter are within the groups. The only group with low cohesion is the NI, whose members have only seven retweets between them. The coalitions on Twitter follow the seating order in the European Parliament remarkably well (see Fig FIGREF42 K). What is striking is that the same blocks form on the left, center, and on the center-right, both in the European Parliament and on Twitter. The largest difference between the coalitions in the European Parliament and on Twitter is on the far-right, where we observe ENL and NI as isolated blocks.
As pointed out in the section “sec:coalitionpolicy”, there is a strong separation in two blocks between supporters and opponents of European integration, which is even more clearly observed in Fig FIGREF42 B.
In the area of State and Evolution of the Union we again observe a strong division in two blocks (see Fig FIGREF42 E). This is different from the Economic and monetary system, however: there we observe far-left and far-right cooperation, whereas here the division is along the traditional left-right axis.
The patterns of coalitions forming on Twitter closely resemble those in the European Parliament. | Does the analysis find that coalitions are formed in the same way for different policy areas? | No. |
null | false | null | Explain how a mechanical watch works | A simple mechanical watch works by transferring power from the main spring, a tightly coiled metal spring, through an escapement and balance wheel, to wheels representing seconds, minutes, and hours.
The watch uses gearing from the main spring that pushes an escapement gear against a ruby pallet to oscillate a balance wheel. The balance wheel and escapement are carefully constructed to oscillate at a specific frequency, releasing small bursts of power to the gear train. The train is constructed to gear the specific frequency of the balance wheel into seconds, minutes and hours.
Further complications like chronographs, moon phase, alarms, and others are not accounted for in this simple watch. An automatic watch contains a weighted rotor in order to wind the mainspring with the motion of a person's arm. |
null | false | null | What else can you do with a hammer other than hammering nails? | You can use a hammer to
1. Remove nails for those with a claw
2. Defend yourself, using it as a weapon
3. Juggle between your left and right hands to practice hand-eye coordination
4. Substitute for a paperweight if the hammer is not too big | |
null | false | null | Identify which instrument is string or percussion: Kundu, Kabosy | Kabosy is string, Kundu is percussion. | |
null | false | null | D'Artagnan hires a servant named Planchet, finds lodgings and reports to Monsieur des Essart, whose company is a less prestigious regiment in which he will have to serve for two years before being considered for the musketeers. Shortly after, his landlord speaks to him about the kidnapping of his wife, Constance Bonacieux. When she is presently released, D'Artagnan falls in love at first sight with her. She works for Queen Anne of France, who is secretly having an affair with the English duke of Buckingham. The king, Louis XIII, gave the queen a gift of diamond studs, but she gives them to her lover as a keepsake. | What is the name of the woman D'Artagnan falls in love with ? | Constance Bonacieux |
1811.02076 | true | null | We evaluate a multi-task approach and three algorithms that explicitly model the task dependencies. We perform experiments on document-level variants of the SQuAD dataset BIBREF1 . The contributions for our papers are:
We evaluate a multi-task approach and three algorithms that explicitly model the task dependencies. | Will these findings be robust through different datasets and different question answering algorithms? | Yes. |
null | false | null | Why is the SEC the best football conference in America? | The SEC is the best conference for several reasons. The National champion has come from the SEC 13 times since 2006. The fan bases are unmatched in their love for football and the honest truth is that football is a way of life in the SEC. | |
null | false | null | What is Brdinje | Brdinje (pronounced [bəɾˈdiːnjɛ]) is a dispersed settlement in the hills southeast of Ravne na Koroškem in the Carinthia region in northern Slovenia | |
null | false | null | Give me a list of the best food to try in Singapore | Laksa
Hainanese Chicken Rice
Chilli Crab
Bak Kut Teh
Hokkien Prawn Mee
Satay
Nasi Padang
Kaya Toast
Durian | |
null | false | null | John Moses Browning (January 23, 1855 – November 26, 1926) was an American firearm designer who developed many varieties of military and civilian firearms, cartridges, and gun mechanisms – many of which are still in use around the world. He made his first firearm at age 13 in his father's gun shop and was awarded the first of his 128 firearm patents on October 7, 1879, at the age of 24. He is regarded as one of the most successful firearms designers of the 19th and 20th centuries and pioneered the development of modern repeating, semi-automatic, and automatic firearms.
Browning influenced nearly all categories of firearms design, especially the autoloading of ammunition. He invented, or made significant improvements to, single-shot, lever-action, and pump-action rifles and shotguns. He developed the first reliable and compact autoloading pistols by inventing the telescoping bolt, then integrating the bolt and barrel shroud into what is known as the pistol slide. Browning's telescoping bolt design is now found on nearly every modern semi-automatic pistol, as well as several modern fully automatic weapons. He also developed the first gas-operated firearm, the Colt–Browning Model 1895 machine gun – a system that surpassed mechanical recoil operation to become the standard for most high-power self-loading firearm designs worldwide. He also made significant contributions to automatic cannon development.
Browning's most successful designs include the M1911 pistol, the water-cooled M1917, the air-cooled M1919, and heavy M2 machine guns, the M1918 Browning Automatic Rifle, and the Browning Auto-5 – the first semi-automatic shotgun. Some of these arms are still manufactured, often with only minor changes in detail and cosmetics to those assembled by Browning or his licensees. The Browning-designed M1911 and Hi-Power are some of the most copied firearms in the world. | Who was John Moses Browning? | John Moses Browning is one of the most well-known designer of modern firearms. He started building firearms in his father's shop at the age of 13, and was awarded his first patent when he was 24.
He designed the first reliable automatic pistol, and the first gas-operated firearm, as well as inventing or improving single-shot, lever-action, and pump-action rifles and shotguns.
Today, he is most well-known for the M1911 pistol, the Browning Automatic Rifle, and the Auto-5 shotgun, all of which are still in production in either their original design or with minor changes. His M1911 and Hi-Power pistol designs are some of the most reproduced firearms in the world today. |
null | false | null | Which of these things are commonly included in a golf bag? Putter, Driver, Irons, Golf Tees, Frisbee, Shovel, Curling Iron, Golf Balls | Things that are commonly found in an everyday golfer's bag include items like a putter club for putting on the green, a driver club for teeing off to start the hole, golf irons for hitting the ball from the fairway, golf tees and replacement golf balls. A frisbee and shovel are often used outdoors but they are not included in a golf bag. A 'curling iron' is a hot tool used for curling hair; a curling iron is not used on a golf course. | |
null | false | null | disruptive innovation is innovation that creates a new market and value network or enters at the bottom of an existing market and eventually displaces established market-leading firms, products, and alliances.The concept was developed by the American academic Clayton Christensen and his collaborators and has been called the most influential business idea of the early 21st century.
Disruptive innovations tend to be produced by outsiders and entrepreneurs in startups, rather than existing market-leading companies. The business environment of market leaders does not allow them to pursue disruptive innovations when they first arise, because they are not profitable enough at first and because their development can take scarce resources away from sustaining innovations | Summarize disruptive innovation in two paragraphs | In business theory, disruptive innovation is innovation that creates a new market and value network or enters at the bottom of an existing market and eventually displaces established market-leading firms, products, and alliances. The concept was developed by the American academic Clayton Christensen and his collaborators beginning in 1995, and has been called the most influential business idea of the early 21st century. Lingfei Wu, Dashun Wang, and James A. Evans generalized this term to identify disruptive science and technological advances from more than 65 million papers, patents and software products that span the period 1954–2014. Their work was featured as the cover of the February 2019 issue of Nature and was included among the Altmetric 100 most-discussed work in 2019.
Not all innovations are disruptive, even if they are revolutionary. For example, the first automobiles in the late 19th century were not a disruptive innovation, because early automobiles were expensive luxury items that did not disrupt the market for horse-drawn vehicles. The market for transportation essentially remained intact until the debut of the lower-priced Ford Model T in 1908. The mass-produced automobile was a disruptive innovation, because it changed the transportation market, whereas the first thirty years of automobiles did not.
Disruptive innovations tend to be produced by outsiders and entrepreneurs in startups, rather than existing market-leading companies. The business environment of market leaders does not allow them to pursue disruptive innovations when they first arise, because they are not profitable enough at first and because their development can take scarce resources away from sustaining innovations (which are needed to compete against current competition). Small teams are more likely to create disruptive innovations than large teams. A disruptive process can take longer to develop than by the conventional approach and the risk associated to it is higher than the other more incremental, architectural or evolutionary forms of innovations, but once it is deployed in the market, it achieves a much faster penetration and higher degree of impact on the established markets. |
1906.08382 | false | null | Performance figures are computed using tail prediction on the test sets: For each test triple $(h,r,t)$ with open-world head $h \notin E$ , we rank all known entities $t^{\prime } \in E$ by their score $\phi (h,r,t^{\prime })$ . We then evaluate the ranks of the target entities $t$ with the commonly used mean rank (MR), mean reciprocal rank (MRR), as well as Hits@1, Hits@3, and Hits@10.
We then evaluate the ranks of the target entities $t$ with the commonly used mean rank (MR), mean reciprocal rank (MRR), as well as Hits@1, Hits@3, and Hits@10. | What were the evaluation metrics studied in this work? | The answers are shown as follows:
* mean rank (MR), mean reciprocal rank (MRR), as well as Hits@1, Hits@3, and Hits@10
|
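For reference, these ranking metrics are straightforward to compute once each target entity's rank is known. A minimal sketch (not the paper's code; it assumes ranks are 1-indexed):

```python
def ranking_metrics(ranks, ks=(1, 3, 10)):
    """Compute MR, MRR, and Hits@k from 1-indexed ranks of target entities."""
    n = len(ranks)
    metrics = {
        "MR": sum(ranks) / n,                    # mean rank
        "MRR": sum(1.0 / r for r in ranks) / n,  # mean reciprocal rank
    }
    for k in ks:
        # fraction of target entities ranked within the top k
        metrics[f"Hits@{k}"] = sum(r <= k for r in ranks) / n
    return metrics
```

For example, ranks [1, 4, 10] give MR = 5.0, MRR ≈ 0.45, and Hits@10 = 1.0.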
null | false | null | Morocco has a wide range of biodiversity. It is part of the Mediterranean basin, an area with exceptional concentrations of endemic species undergoing rapid rates of habitat loss, and is therefore considered to be a hotspot for conservation priority. Avifauna are notably variant. The avifauna of Morocco includes a total of 454 species, five of which have been introduced by humans, and 156 are rarely or accidentally seen.
The Barbary lion, hunted to extinction in the wild, was a subspecies native to Morocco and is a national emblem. The last Barbary lion in the wild was shot in the Atlas Mountains in 1922. The other two primary predators of northern Africa, the Atlas bear and Barbary leopard, are now extinct and critically endangered, respectively. Relict populations of the West African crocodile persisted in the Draa river until the 20th century.
The Barbary macaque, a primate endemic to Morocco and Algeria, is also facing extinction due to offtake for trade human interruption, urbanisation, wood and real estate expansion that diminish forested area – the macaque's habitat.
Trade of animals and plants for food, pets, medicinal purposes, souvenirs and photo props is common across Morocco, despite laws making much of it illegal. This trade is unregulated and causing unknown reductions of wild populations of native Moroccan wildlife. Because of the proximity of northern Morocco to Europe, species such as cacti, tortoises, mammal skins, and high-value birds (falcons and bustards) are harvested in various parts of the country and exported in appreciable quantities, with especially large volumes of eel harvested – 60 tons exported to the Far East in the period 2009‒2011. | What's the Moroccan biodiversity? | Morocco has a large variety of biodiversity. It has exceptional concentrations of endemic species. The avifauna includes a total of 454 species. The Barbary lion was a subspecies native to Morocco. The other two primary predators of northern Africa, the Atlas bear and Barbary leopard, are now extinct and critically endangered, respectively.
The Barbary macaque is a primate endemic to Morocco. |
null | false | null | Numerous cultivars have been selected. Some varieties are edible in the crisp, firm state but it has its best flavor when allowed to rest and soften slightly after harvest. The Japanese cultivar 'Hachiya' is widely grown. The fruit has a high tannin content, which makes the unripe fruit astringent and bitter. The tannin levels are reduced as the fruit matures. Persimmons like 'Hachiya' must be completely ripened before consumption. When ripe, this fruit comprises thick, pulpy jelly encased in a waxy thin-skinned shell.[citation needed]
Commercially and in general, there are two types of persimmon fruit: astringent and non-astringent.
The heart-shaped Hachiya is the most common variety of astringent persimmon. Astringent persimmons contain very high levels of soluble tannins and are unpalatable if eaten before completely softened. The astringency of tannins is removed in various ways. Examples include ripening by exposure to light for several days and wrapping the fruit in paper (probably because this increases the ethylene concentration of the surrounding air).
The non-astringent persimmon is squat like a tomato and is most commonly sold as fuyu. Non-astringent persimmons are not actually free of tannins as the term suggests but rather are far less astringent before ripening and lose more of their tannic quality sooner. Non-astringent persimmons may be consumed when still very firm and remain edible when very soft. | What type of persimmon can be eaten raw? | Non-astringent persimmons like fuyu can be eaten raw, while astringent persimmons like Hachiya need to be ripe to be eaten.
null | false | 200 | Recent studies have shown the vulnerability of ML models to adversarial attacks, small perturbations which lead to misclassification of inputs. Adversarial example generation in NLP BIBREF0 is more challenging than in common computer vision tasks BIBREF1, BIBREF2, BIBREF3 due to two main reasons: the discrete nature of input space and ensuring semantic coherence with the original sentence. A major bottleneck in applying gradient based BIBREF4 or generator model BIBREF5 based approaches to generate adversarial examples in NLP is the backward propagation of the perturbations from the continuous embedding space to the discrete token space.
Recent works for attacking text models rely on introducing errors at the character level in words BIBREF6, BIBREF7 or adding and deleting words BIBREF8, BIBREF9, BIBREF10, etc. for creating adversarial examples. These techniques often result in adversarial examples which are unnatural looking and lack grammatical correctness, and thus can be easily identified by humans.
TextFooler BIBREF11 is a black-box attack, that uses rule based synonym replacement from a fixed word embedding space to generate adversarial examples. These adversarial examples do not account for the overall semantics of the sentence, and consider only the token level similarity using word embeddings. This can lead to out-of-context and unnaturally complex replacements (see Table ), which can be easily identifiable by humans.
The recent advent of powerful language models BIBREF12, BIBREF13 in NLP has paved the way for using them in various downstream applications. In this paper, we present a simple yet novel technique: BAE (BERT-based Adversarial Examples), which uses a language model (LM) for token replacement to best fit the overall context. We perturb an input sentence by either replacing a token or inserting a new token in the sentence, by means of masking a part of the input and using an LM to fill in the mask (see Figure FIGREF1). BAE relies on the powerful BERT masked LM for ensuring grammatical correctness of the adversarial examples. Our attack beats the previous baselines by a large margin and confirms the inherent vulnerabilities of modern text classification models to adversarial attacks. Moreover, BAE produces richer and more natural-looking adversarial examples as it uses the semantics learned by an LM.
To the best of our knowledge, we are the first to use a LM for adversarial example generation. We summarize our major contributions as follows:
We propose BAE, a novel strategy for generating natural looking adversarial examples using a masked language model.
We introduce 4 BAE attack modes, all of which are almost always stronger than previous baselines on 7 text classification datasets.
We show that, surprisingly, just a few replace/insert operations can reduce the accuracy of even a powerful BERT-based classifier by over $80\%$ on some datasets.
We introduce 4 BAE attack modes by replacing and inserting tokens, all of which are almost always stronger than previous baselines on 7 text classification datasets. | Are all the 4 BAE attack modes almost always stronger than previous baselines on 7 text classification datasets? | Yes, they are. |
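The replace mode described above can be sketched as a greedy loop: mask one token at a time, let a masked LM propose in-context candidates, and keep whichever substitution most reduces the target classifier's confidence. The sketch below is illustrative only — `fill_mask` and `clf_confidence` are stand-ins for a real masked LM (e.g. BERT behind a fill-mask interface) and the attacked classifier, not the paper's implementation:

```python
def bae_replace(tokens, clf_confidence, fill_mask, top_k=5):
    """Greedy replace-style attack sketch: try masked-LM replacements token
    by token, keeping the one that most lowers the classifier's confidence."""
    best = list(tokens)
    best_conf = clf_confidence(best)
    for i in range(len(tokens)):
        masked = best[:i] + ["[MASK]"] + best[i + 1:]
        for cand in fill_mask(masked, top_k):      # LM proposes in-context tokens
            if cand == best[i]:
                continue
            perturbed = best[:i] + [cand] + best[i + 1:]
            conf = clf_confidence(perturbed)
            if conf < best_conf:                   # keep the strongest perturbation
                best, best_conf = perturbed, conf
    return best, best_conf
```

Because the LM scores candidates in the context of the whole sentence, replacements stay grammatical, which is the key difference from fixed-embedding synonym swaps such as TextFooler.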
null | false | null | Who won the March Madness Mens Basketball Tournament in 2022? | The University Kansas Jayhawks | |
null | false | 7 | The Flickr30K dataset BIBREF0 is a collection of over 30,000 images with 5 crowdsourced descriptions each. It is commonly used to train and evaluate neural network models that generate image descriptions (e.g. BIBREF2 ). An untested assumption behind the dataset is that the descriptions are based on the images, and nothing else. Here are the authors (about the Flickr8K dataset, a subset of Flickr30K):
“By asking people to describe the people, objects, scenes and activities that are shown in a picture without giving them any further information about the context in which the picture was taken, we were able to obtain conceptual descriptions that focus only on the information that can be obtained from the image alone.” BIBREF1
What this assumption overlooks is the amount of interpretation or recontextualization carried out by the annotators. Let us take a concrete example. Figure FIGREF1 shows an image from the Flickr30K dataset.
This image comes with the five descriptions below. All but the first one contain information that cannot come from the image alone. Relevant parts are highlighted in bold:
We need to understand that the descriptions in the Flickr30K dataset are subjective descriptions of events. This can be a good thing: the descriptions tell us what are the salient parts of each image to the average human annotator. So the two humans in Figure FIGREF1 are relevant, but the two soap dispensers are not. But subjectivity can also result in stereotypical descriptions, in this case suggesting that the male is more likely to be the manager, and the female is more likely to be the subordinate. Rashtchian et al. (2010) do note that some descriptions are speculative in nature, which they say hurts the accuracy and the consistency of the descriptions. But the problem is not with the lack of consistency here. Quite the contrary: the problem is that stereotypes may be pervasive enough for the data to be consistently biased. And so language models trained on this data may propagate harmful stereotypes, such as the idea that women are less suited for leadership positions.
This paper aims to give an overview of linguistic bias and unwarranted inferences resulting from stereotypes and prejudices. I will build on earlier work on linguistic bias in general BIBREF3 , providing examples from the Flickr30K data, and present a taxonomy of unwarranted inferences. Finally, I will discuss several methods to analyze the data in order to detect biases.
Rashtchian et al. (2010) do note that some descriptions are speculative in nature, which they say hurts the accuracy and the consistency of the descriptions. But the problem is not with the lack of consistency here. Quite the contrary: the problem is that stereotypes may be pervasive enough for the data to be consistently biased. | What are the main problems with the description? | The problem is that stereotypes may be pervasive enough for the data to be consistently biased. |
null | false | 484 | As presented in this section, we conducted two series of experiments to analyze the property of IMTC. First, we qualitatively evaluated the diversity of options learned by IMTC with intrinsic rewards, without any extrinsic rewards. Second, we quantitatively test the reusability of learned options by). Also, we can see that the magnitude of intra-option policies tends larger with constant rewards. Right: Options learned by other methods. OC produces a dead option 3 that terminates everywhere and never-ending options 0 and 1. About intra-option policies, all methods successfully avoided learning the same policy, but they only have two directions.
task adaptation on a specific task. As a baseline of termination learning method, we compared our method with OC. OC is trained with VIC rewards during pre-training. We did not compare IMTC with TC because our TC implementation failed to learn options with relatively small termination regions as reported in the paper, and there is no official public code for TC 1 . During pre-training without extrinsic rewards, IMTC receives intrinsic rewards when the current option terminates. We compare three IMTC variants with different intrinsic rewards: (i) VIC, (ii) RVIC, and (iii) constant value (R IMTC = 0.01). Note that R IMTC = 0.01 is chosen from [0.1, 0.05, 0.01] based on the task adaptation results. We also compare IMTC with vanilla VIC and RVIC with fixed termination probabilities. We used ∀ x β o (x) = 0.1 since it performed the best in task adaptation experiments, while 0.05 was used in. Note that RVIC's objective I(X s ; O|x f ) is different from ours, while IMTC and VIC share almost the same objective. Thus, the use of VIC is more natural, and the combination with RVIC is tested to show the applicability of IMTC. Further details of our VIC and RVIC implementation are found in Appendix B. In order to check only the effect of the different methods for learning beta β o , the rest of the implementation is the same for all these methods. That is, OC, vanilla VIC, and vanilla RVIC are also based on PPO and advantage estimation methods in Section 4.2. In this section, we fix the number of options as |O| = 4 for all option-learning methods. We further investigated the effect of the number of options Appendix C, where we confirmed that |O| = 4 is sufficient for most domains. All environments that we used for experiments are implemented on the MuJoCo physics simulator. We further describe the details in Appendix C.
Option Learning From Intrinsic Rewards We now qualitatively compare the options learned by IMTC with options of other methods. Learned options depend on the reward structure in the environment, which enables manually designing good reward functions for learning diverse options. Thus, we employed a reward-free RL setting where no reward is given to agents. Instead, each compared method uses some intrinsic rewards, as explained. We fix µ as µ(o|x) = 1/|O| in this experiment, since we assume that the future tasks are uniformly distributed. Intra-option policies are trained by PPO and independent GAE (8). We show network architectures and hyperparameters in Appendix C. We set the episode length to 1 × 10^4, i.e., an agent is reset to its starting position after 1 × 10^4 steps. For all visualizations, we chose the best one from five independent runs with different random seeds. We visualized learned options in the PointReach environment shown in Figure. In this environment, an agent controls the ball initially placed at the center of the room. The state space consists of positions (x, y) and velocities (∆x, ∆y) of the agent, and the action space consists of accelerations (∆x/∆t, ∆y/∆t). Figure shows the options learned in this environment after 4 × 10^6 steps. Each arrow represents the mean value of the intra-option policies, and the heatmaps represent β_o. In this experiment, we observed the effect of IMTC clearly, for both termination regions and intra-option policies. Interestingly, we do not see clear differences between options learned with VIC and RVIC rewards, while constant rewards tend to make options peakier. OC failed to learn meaningful termination regions: options 0 and 1 never terminate, and option 3 terminates almost everywhere. This result confirms that IMTC can certainly diversify options.
Moreover, compared to vanilla VIC and RVIC, intra-option policies learned by IMTC with VIC or RVIC rewards are clearer, in terms of both the magnitude and directions of policies. We believe that this is because diversifying termination regions gives more biased samples to the option classifiers employed by VIC and RVIC. Transferring skills via task adaptation Now we quantitatively test the reusability of learned options by task adaptation with specific reward functions. Specifically, we first trained agents with intrinsic rewards as per the previous section. Then we transferred agents to an environment with the same state and action space but with external rewards. We prepared multiple reward functions, which we call tasks, for each domain and evaluated the averaged performance over tasks. We compare IMTC with OC, vanilla VIC, vanilla RVIC, and PPO without pre-training. Also, we compare three variants of IMTC with different intrinsic rewards during pre-training. For a fair comparison, UGAE (9) and PPO are used for all option-learning methods. Note that we found UGAE is very effective in these experiments, as we show in the ablation study in Appendix C.6. For vanilla VIC and vanilla RVIC, the termination probability is fixed to 0.1 through pre-training and task adaptation. ϵ-greedy based on Q_O with ϵ = 0.1 is used as the option selection policy µ. We hypothesize that diverse options learned by IMTC can help quickly adapt to given tasks, supposing the diversity of tasks.
Figure shows all domains used for the task adaptation experiments. For simplicity, all tasks have goal-based sparse reward functions. I.e., an agent receives R_t = 1.0 when it satisfies a goal condition, and otherwise a control cost of −0.0001 is given. Red circles show possible goal locations for each task. When the agent fails to reach the goal after 1000 steps, it is reset to a starting position. PointReach, SwimmerReach, and AntReach are simple navigation tasks where an agent aims simply to navigate itself to the goal. We also prepared tasks with object manipulation: in PointBilliard and AntBilliard an agent aims to kick the blue ball to the goal position, and in PointPush and AntPush, it has to push the block out of the way to the goal. We pre-trained option-learning agents for 4 × 10^6 environmental steps and additionally trained them for 1 × 10^6 steps for each task. Figure shows learning curves and scatter plots drawn from five independent runs with different random seeds per domain. Here, we observed that IMTC with VIC or RVIC rewards performed the best or was comparable with the baselines. IMTC with VIC performed better than OC with VIC except for AntReach, which backs up the effectiveness of diversifying termination regions for learning reusable options. Also, IMTC with VIC and IMTC with RVIC respectively performed better in most of the tasks than VIC and RVIC with fixed termination probabilities. This result suggests that IMTC can boost the performance of option learning methods based on option classifiers, even when the objective is different as with RVIC. On the other hand, IMTC with constant rewards (R_IMTC = 0.01) performed worse than IMTC with VIC or RVIC rewards, although it also learned diverse options as we show in Figure, suggesting the importance of adjusting rewards. We further analyzed the evolution of intrinsic rewards of VIC and RVIC in Appendix C.5.
In addition, we can observe that IMTC's performance is especially better than other methods in the relatively complex PointBilliard, AntBilliard, and AntPush, where object manipulation is required. Considering that the manipulated balls and boxes move faster than the agents in these domains, a choice of options can lead to larger differences in the future state. IMTC is suited to these domains since it maximizes the diversity of the resulting states, while PPO struggles to learn. In contrast, IMTC's performance is close to other methods in the Reach tasks, where the goal states are relatively close to the starting states in terms of Euclidean distances.
Gridworld experiments and limitation of the method Although IMTC successfully learned diverse options in the MuJoCo experiments, our analysis in Figure shows the possibility of learning options that are not interesting but have large MI. We further investigated this possibility by visualizing options in a classical four-rooms gridworld in Appendix C.8. Interestingly, we observed that IMTC could fall into diverse but meaningless options in that environment. We believe that IMTC is often sufficient in a large environment where a randomly-initialized agent rarely gets the same trajectory. However, when the number of possible trajectories is small, diversifying the destinations could be insufficient. In such cases, it can be necessary to extend IMTC to diversify trajectories as done in.
Figure 5 shows learning curves and scatter plots drawn from five independent runs with different random seeds per domain. | What are the error bars in Fig 5? | We did not show error bars in Fig 5, but just showed scatter plots and learning curves regressed based on the resulting episodic returns. We will add 95% confidence intervals for both in the revised manuscript. |
null | false | null | Divide these sneakers into Air Jordan and Air Max: Tokyo 96, Big Bubble Red, Collector's Dream, Monarch, Chicago, Lucky Green | Air Jordan: Tokyo 96, Chicago, Lucky Green
Air Max: Big Bubble Red, Collector's Dream, Monarch | |
null | false | null | Classify the following cartridge types by whether they are typically used in a pistol or a rifle: 9mm, .40 S&W, 5.56mm NATO, .45 ACP, 7.62mm NATO, 6.5mm Creedmoor, 6.5mm Grendel, .357 Magnum | 9mm - Pistol
.40 S&W - Pistol
5.56mm NATO - Rifle
.45 ACP - Pistol
7.62mm NATO - Rifle
6.5mm Creedmoor - Rifle
6.5mm Grendel - Rifle
.357 Magnum - Pistol | |
1804.04225 | false | null | The overview of our approach is shown in Figure FIGREF6. Within ICU notes (e.g., text example in top-left box in Figure 2), we first identify all abbreviations using regular expressions and then try to find all possible expansions of these abbreviations from a domain-specific knowledge base as candidates. We train word embeddings using the clinical notes data with task-oriented resources such as Wikipedia articles of candidates and medical scientific papers and compute the semantic similarity between an abbreviation and its candidate expansions based on their embeddings (vector representations of words).
Within ICU notes (e.g., text example in top-left box in Figure 2), we first identify all abbreviations using regular expressions and then try to find all possible expansions of these abbreviations from a domain-specific knowledge base as candidates. | How do they identify abbreviations? | The answers are shown as follows:
* identify all abbreviations using regular expressions
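The paper only says that abbreviations are found "using regular expressions" without giving the actual patterns, so the following is a hedged sketch; the pattern (2 to 6 capital letters, optionally period-separated) and the example text are illustrative assumptions, not the authors' implementation.

```python
import re

# Hypothetical pattern: runs of 2-6 uppercase letters, optionally
# period-separated (e.g. "ICU", "C.H.F."). The real system's patterns
# are not specified in the paper excerpt.
ABBREV_RE = re.compile(r'\b(?:[A-Z]\.){2,6}|\b[A-Z]{2,6}\b')

def find_abbreviations(text):
    """Return the unique abbreviation candidates found in `text`."""
    return sorted(set(ABBREV_RE.findall(text)))

print(find_abbreviations("Patient admitted to ICU with CHF and elevated BP."))
# ['BP', 'CHF', 'ICU']
```

Each match would then be looked up in the knowledge base to enumerate candidate expansions, as described in the evidence above.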
|
null | false | null | Gross domestic product (GDP) is a monetary measure of the market value of all the final goods and services produced and sold in a specific time period by a country or countries, generally "without double counting the intermediate goods and services used up to produce them". GDP is most often used by the government of a single country to measure its economic health. Due to its complex and subjective nature, this measure is often revised before being considered a reliable indicator. GDP (nominal) per capita does not, however, reflect differences in the cost of living and the inflation rates of the countries; therefore, using a basis of GDP per capita at purchasing power parity (PPP) may be more useful when comparing living standards between nations, while nominal GDP is more useful comparing national economies on the international market. Total GDP can also be broken down into the contribution of each industry or sector of the economy. The ratio of GDP to the total population of the region is the per capita GDP (also called the Mean Standard of Living).
GDP definitions are maintained by a number of national and international economic organizations. The Organisation for Economic Co-operation and Development (OECD) defines GDP as "an aggregate measure of production equal to the sum of the gross values added of all resident and institutional units engaged in production and services (plus any taxes, and minus any subsidies, on products not included in the value of their outputs)". An IMF publication states that, "GDP measures the monetary value of final goods and services—that are bought by the final user—produced in a country in a given period of time (say a quarter or a year)."
GDP is often used as a metric for international comparisons as well as a broad measure of economic progress. It is often considered to be the world's most powerful statistical indicator of national development and progress. However, critics of the growth imperative often argue that GDP measures were never intended to measure progress, and leave out key other externalities, such as resource extraction, environmental impact and unpaid domestic work. Critics frequently propose alternative economic models such as doughnut economics which use other measures of success or alternative indicators such as the OECD's Better Life Index as better approaches to measuring the effect of the economy on human development and well being. | Summarize what GDP means as well as what it is and is not used for. | Gross Domestic Product (GDP) is a measurement of all of the goods and services sold in a specific country over a specific time period. It is used as a metric for economic health, economic progress, and for international comparisons. GDP is not a used to measure cost of living differences and inflation rates between countries. |
null | false | null | What is detection engineering and what are the benefits? | Detection engineering is a new approach to threat detection. More than just writing detection rules, detection engineering is a process of applying systems thinking and engineering to more accurately detect threats. The goal is to create an automated system of threat detection which is customizable, flexible, repeatable, and produces high quality alerts for security teams to act upon. Detection engineering is not yet a mature discipline with consistent methods and predictable results, but pioneers in the field are working towards this goal. There is not a lot written about the concept right now, and there are no go-to standards or frameworks. It appears companies are developing their own approaches to incorporate the concept and improve their detection and response capabilities.
A concept called detection-as-code (DaC), first coined by Anton Chuvakin in 2020, is at the heart of detection engineering, and points to the idea that detections should be treated as code. Essentially, it is about applying software engineering best practices to detections using modern agile CI/CD processes.
The benefits of detection engineering include:
1. Reduced mean-time-to-response through automation of detections.
2. Detections that are more relevant to the environment.
3. A structured, repeatable process that informs workflows.
4. Detection rules and analytics that are version controlled, reused, and modified as needed.
5. Peer review and automated testing as part of the process, to catch mistakes and gaps.
null | false | null | Say that you take 1 step left, then 2 forward, then 4 right, then 3 right, then 3 forward, then 5 back. Take me through the process of determining how far you end up from your starting position. | First, we find the displacement from the starting position, which we will use to calculate the distance. To do this, collect the steps into groups by direction.
Left: we take 1 step left
Right: we take 4 + 3 = 7 steps right
Forward: we take 2 + 3 = 5 steps forward
Back: we take 5 steps back
We can subtract step counts in opposite directions to get our final displacement
Left/Right: we take 1 step left and 7 right, so we take a total of 1 - 7 = -6 steps left. This is equivalent to 6 steps right.
Forward/Back: we take 5 steps forward and 5 steps back, so we take a total of 5 - 5 = 0 steps forward.
This means we end up 6 steps to the right and 0 steps forward from where we started, which is a distance of 6 steps.
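As a cross-check, the same bookkeeping can be done in a few lines of Python; the coordinate convention (right and forward positive) is just one reasonable choice.

```python
import math

# Signed axes: right/forward positive, left/back negative.
MOVES = {"left": (-1, 0), "right": (1, 0), "forward": (0, 1), "back": (0, -1)}

def displacement(steps):
    """Sum per-direction step counts into a net (x, y) displacement."""
    x = y = 0
    for direction, count in steps:
        dx, dy = MOVES[direction]
        x += dx * count
        y += dy * count
    return x, y

walk = [("left", 1), ("forward", 2), ("right", 4),
        ("right", 3), ("forward", 3), ("back", 5)]
x, y = displacement(walk)
print(x, y, math.hypot(x, y))  # net 6 right, 0 forward: distance 6
```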
null | false | 52 | We conducted two feature study experiments: 1) a feature ablation study to assess the contribution of feature groups and 2) a feature elimination study to determine the optimal percentile of top ranked features for classifying Twitter tweets in the depression schema hierarchy.
We conducted two feature study experiments: 1) a feature ablation study to assess the contribution of feature groups and 2) a feature elimination study to determine the optimal percentile of top ranked features for classifying Twitter tweets in the depression schema hierarchy. | What are the two feature studies we did? | 1) a feature ablation study to assess the contribution of feature groups and 2) a feature elimination study to determine the optimal percentile of top ranked features for classifying Twitter tweets in the depression schema hierarchy. |
null | false | null | Traditional dried fruit such as raisins, figs, dates, apricots and apples have been a staple of Mediterranean diets for millennia. This is due partly to their early cultivation in the Middle Eastern region known as the Fertile Crescent, made up by parts of modern Iran, Iraq, southwest Turkey, Syria, Lebanon, Palestine, Israel, and northern Egypt. Drying or dehydration also happened to be the earliest form of food preservation: grapes, dates, and figs that fell from the tree or vine would dry in the hot sun. Early hunter-gatherers observed that these fallen fruit took on an edible form, and valued them for their stability as well as their concentrated sweetness.
The earliest recorded mention of dried fruits can be found in Mesopotamian tablets dating to about 1500 BC, which contain what are probably the oldest known written recipes. These clay slabs, written in Akkadian, the daily language of Babylonia, were inscribed in cuneiform and tell of diets based on grains (barley, millet, wheat), vegetables and fruits such as dates, figs, apples, pomegranates, and grapes. These early civilizations used dates, date juice evaporated into syrup and raisins as sweeteners. They included dried fruits in their breads for which they had more than 300 recipes, from simple barley bread for the workers to very elaborate, spiced cakes with honey for the palaces and temples.
The date palm was one of the first cultivated trees. It was domesticated in Mesopotamia more than 5,000 years ago. It grew abundantly in the Fertile Crescent and it was so productive (an average date palm produces 50 kg (100 lbs) of fruit a year for 60 years or more) that dates were the cheapest of staple foods. Because they were so valuable, they were well recorded in Assyrian and Babylonian monuments and temples. The villagers in Mesopotamia dried them and ate them as sweets. Whether fresh, soft-dried or hard-dried, they helped to give character to meat dishes and grain pies. They were valued by travelers for their energy and were recommended as stimulants against fatigue.
Figs were also prized in early Mesopotamia, Palestine, Israel, and Egypt where their daily use was probably greater than or equal to that of dates. As well as appearing in wall paintings, many specimens have been found in Egyptian tombs as funerary offerings. In Greece and Crete, figs grew very readily and they were the staple of poor and rich alike, particularly in their dried form.
Grape cultivation first began in Armenia and the eastern regions of the Mediterranean in the 4th century BC. Raisins were produced by drying grapes in the hot desert sun. Very quickly, viticulture and raisin production spread across northern Africa including Morocco and Tunisia. The Phoenicians and the Egyptians popularized the production of raisins, probably due to the perfect arid environment for sun drying. They put them in jars for storage and allotted them to the different temples by the thousands. They also added them to breads and various pastries, some made with honey, some with milk and eggs.
From the Middle East, these fruits spread through Greece to Italy where they became a major part of the diet. Ancient Romans consumed raisins in spectacular quantities and at all levels of society, including them as a key part of their common meals, along with olives and fresh fruits. Raisined breads were common for breakfast and were consumed with their grains, beans, and cultured milks. Raisins were so valued that they transcended the food realm and became rewards for successful athletes, as well as premium barter currency.
Having dried fruits was essential in ancient Rome as these instructions for housekeepers around 100 BC tell: "She must keep a supply of cooked food on hand for you and the servants. She must keep many hens and have plenty of eggs. She must have a large store of dried pears, sorbs, figs, raisins, sorbs in must, preserved pears and grapes and quinces. She must also keep preserved grapes in grape-pulp and in pots buried in the ground, as well as fresh Praenestine nuts kept in the same way, and Scantian quinces in jars, and other fruits that are usually preserved, as well as wild fruits. All these she must store away diligently every year."
Figs were also extremely popular in Rome. Dried figs were added to bread and formed a major part of the winter food of country people. They were rubbed with spices such as cumin, anise and fennel seeds, or toasted sesame, wrapped in fig leaves and stored in jars. Today, major producing regions include Israel, Jerusalem, Gaza and many other Arabic countries. Dried figs are rich in vitamins, phosphorus and various other important minerals.
Plums, apricots and peaches had their origins in Asia. They were domesticated in China in the 3rd millennium BC and spread to the Fertile Crescent where they were also very popular, fresh and dried alike. They arrived in Greece and Italy much later and were very expensive but valued in the preparation of gourmet dishes with port or stewed with honey and spices. | Which countries today were commonly part of the Fertile Crescent in history? | The Middle Eastern region known as the Fertile Crescent, is made up by parts of modern Iran, Iraq, southwest Turkey, Syria, Lebanon, Palestine, Israel, and northern Egypt. |
null | false | null | Here's a list of the key characters in the TV series Picard who also appeared in another Star Trek franchise series: Jean-Luc Picard, Data, Seven of Nine, Deanna Troi, William Riker, Q, Borg Queen, and Wesley Crusher.
List the characters, sorted by the original series in which they appeared. | All of the characters first appeared on either The Next Generation or Voyager.
The Next Generation: Jean-Luc Picard, Data, Deanna Troi, William Riker, Q, and Wesley Crusher.
Voyager: Seven of Nine and Borg Queen | |
null | false | null | What are a lot of different options for traveling across New York City? | You could bike, ride a motorcycle, drive a car, take a bus, use the subway or hail a cab | |
null | false | null | What is the capital of France? | The capital of France is Paris | |
null | false | null | Where is the best place to see the northern lights? | The northern lights are experienced most frequently in northern Scandinavia. Tromsø in Norway, Lapland in Sweden and Reykjavik in Iceland are recommended locations. The Aurora Borealis is brightest between the months of September and April. | |
1701.08118 | false | null | To find a large amount of hate speech on the refugee crisis, we used 10 hashtags that can be used in an insulting or offensive way. Using these hashtags we gathered 13 766 tweets in total, roughly dating from February to March 2016. However, these tweets contained a lot of non-textual content which we filtered out automatically by removing tweets consisting solely of links or images. We also only considered original tweets, as retweets or replies to other tweets might only be clearly understandable when reading both tweets together. In addition, we removed duplicates and near-duplicates by discarding tweets that had a normalised Levenshtein edit distance smaller than .85 to an aforementioned tweet. A first inspection of the remaining tweets indicated that not all search terms were equally suited for our needs. The search term #Pack (vermin or lowlife) found a potentially large amount of hate speech not directly linked to the refugee crisis. It was therefore discarded. As a last step, the remaining tweets were manually read to eliminate those which were difficult to understand or incomprehensible. After these filtering steps, our corpus consists of 541 tweets, none of which are duplicates, contain links or pictures, or are retweets or replies.
To find a large amount of hate speech on the refugee crisis, we used 10 hashtags that can be used in an insulting or offensive way. | How were potentially hateful messages identified? | The answers are shown as follows:
* 10 hashtags that can be used in an insulting or offensive way
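The near-duplicate filtering described in the evidence, discarding tweets whose normalised Levenshtein edit distance to an earlier tweet falls below a threshold, can be sketched as follows; the greedy keep-or-discard loop is an assumption about how that pass was implemented.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def normalized_distance(a, b):
    """Edit distance scaled to [0, 1] by the longer string's length."""
    if not a and not b:
        return 0.0
    return levenshtein(a, b) / max(len(a), len(b))

def deduplicate(tweets, threshold=0.85):
    """Keep a tweet only if it is far enough from every kept tweet."""
    kept = []
    for t in tweets:
        if all(normalized_distance(t, k) >= threshold for k in kept):
            kept.append(t)
    return kept

print(levenshtein("kitten", "sitting"))  # 3
```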
|
null | false | null | Identify which instrument is string or percussion: Ngoma drums, Zhuihu | Zhuihu is string, Ngoma drums is percussion. | |
null | false | null | The Epic Poker League was a series of poker tournaments which took place in 2011, organised by Federated Sports + Gaming. Former World Series of Poker commissioner Jeffrey Pollack served as Executive Chairman, professional poker player Annie Duke was Commissioner, and Matt Savage was Tournament Director. The three events took place at the Palms Casino Resort in Las Vegas, Nevada. Season One received television coverage on CBS and Velocity Network.
The League was exclusionary in that it was only open to poker players who met certain qualification criteria for multimillion-dollar lifetime earnings, multiple wins and/or cashes at high level tournaments, and who were supposed to maintain adherence to a specifically defined Code of Conduct.
On 29 February 2012, Federated Sports + Gaming announced that it had filed for Chapter 11 bankruptcy. The Epic Poker League brand was among the assets acquired by Pinnacle Entertainment, Inc., at a bankruptcy auction in June 2012. | Who was involved with the epic poker league? | The following people were involved with the Epic Poker League:
- Former World Series of Poker commissioner Jeffrey Pollack (Executive Chairman)
- Professional poker player Annie Duke (Commissioner)
- Matt Savage (Tournament Director) |
null | false | 29 | Question classification (QC) deals with question analysis and question labeling based on the expected answer type. The goal of QC is to assign classes accurately to the questions based on the expected answer. In modern systems, there are two types of questions BIBREF0. One is the factoid question, which asks for concise facts, and the other is the complex question, which carries a complex presupposition. A Question Answering (QA) system is an integral part of our daily life because of the heavy use of the Internet for information acquisition. In recent years, most research works related to QA have been based on the English language, such as IBM Watson and Wolfram Alpha. Bengali speakers often fall into difficulty while communicating in English BIBREF1.
In this research, we briefly discuss the steps of a QA system and compare the performance of seven machine learning based classifiers (Multi-Layer Perceptron (MLP), Naive Bayes Classifier (NBC), Support Vector Machine (SVM), Gradient Boosting Classifier (GBC), Stochastic Gradient Descent (SGD), K Nearest Neighbour (K-NN) and Random Forest (RF)) in classifying Bengali questions to classes based on their anticipated answers. Bengali questions can be phrased in many flexible ways, so there are many difficulties associated with Bengali QC BIBREF0. As there is no rich corpus of questions available in the Bengali language, collecting questions is an additional challenge. Different difficulties in building a QA system are mentioned in the literature BIBREF2 BIBREF3. The first work on a machine learning based approach towards Bengali question classification is presented in BIBREF0, which employs Stochastic Gradient Descent (SGD).
In this research, we briefly discuss the steps of QA system and compare the performance of seven machine learning based classifiers (Multi-Layer Perceptron (MLP), Naive Bayes Classifier (NBC), Support Vector Machine (SVM), Gradient Boosting Classifier (GBC), Stochastic Gradient Descent (SGD), K Nearest Neighbour (K-NN) and Random Forest (RF)) in classifying Bengali questions to classes based on their anticipated answers. | In this research, based on what to compare the performance of seven classifiers? | Based on their anticipated answers is compared. |
null | false | null | Husinec is located about 6 kilometres (4 mi) north of Prague. It lies in the Prague Plateau. It is situated in a meander of the Vltava River, partly in the valley of the river and partly on a promontory above the valley.
The municipality is known for high average temperatures, which are caused by the specific relief of the landscape and the natural conditions of the river valley. Drought-tolerant and heat-tolerant plants typical of subtropical climates thrive here. On 19 June 2022, the highest June temperature in the Czech Republic was recorded here, namely 39.0 °C (102.2 °F). | Given this reference text about Husinec, what was the highest temperature reached in Fahrenheit? | 102.2 degrees Fahrenheit |
1907.08937 | false | null | In this section, we consider two kinds of relational classification tasks: (1) relation prediction and (2) relation extraction. Relation prediction aims at predicting the relationship between entities with a given set of triples as training data; while relation extraction aims at extracting the relationship between two entities in a sentence.
We hope to design a simple and clear experiment setup to conduct error analysis for relational prediction. Therefore, we consider a typical method TransE BIBREF3 as the subject as well as FB15K BIBREF3 as the dataset. TransE embeds entities and relations as vectors, and trains these embeddings by minimizing DISPLAYFORM0
For relation extraction, we consider the supervised relation extraction setting and TACRED dataset BIBREF10 . As for the subject model, we use the best model on TACRED dataset — position-aware neural sequence model. This method first passes the sentence into an LSTM and then calculates an attention sum of the hidden states in the LSTM by taking positional features into account. This simple and effective method achieves the best results on the TACRED dataset.
In this section, we consider two kinds of relational classification tasks: (1) relation prediction and (2) relation extraction.
We hope to design a simple and clear experiment setup to conduct error analysis for relational prediction. Therefore, we consider a typical method TransE BIBREF3 as the subject as well as FB15K BIBREF3 as the dataset.
For relation extraction, we consider the supervised relation extraction setting and TACRED dataset BIBREF10 . As for the subject model, we use the best model on TACRED dataset — position-aware neural sequence model. | Which competitive relational classification models do they test? | For relation prediction they test TransE and for relation extraction they test position aware neural sequence model |
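The objective elided above as DISPLAYFORM0 is TransE's well-known margin-based ranking loss over the translation energy ||h + r - t||. A minimal numpy sketch of the energy and the per-triple hinge term follows; the toy vectors and the choice of L1 norm are illustrative, not taken from the paper.

```python
import numpy as np

def transe_score(h, r, t, norm=1):
    """Translation energy ||h + r - t||: lower means more plausible."""
    return np.linalg.norm(h + r - t, ord=norm)

def margin_loss(pos, neg, gamma=1.0):
    """Hinge term [gamma + d(pos) - d(neg)]_+ from TransE's ranking loss."""
    return max(0.0, gamma + pos - neg)

h = np.array([0.0, 0.0]); r = np.array([1.0, 0.0])
t_good = np.array([1.0, 0.0]); t_bad = np.array([0.0, 3.0])
print(transe_score(h, r, t_good), transe_score(h, r, t_bad))  # 0.0 4.0
```

Training minimizes the hinge term over corrupted triples (one entity of each gold triple replaced at random), pushing plausible triples below implausible ones by the margin gamma.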
1911.03310 | false | null | Following BIBREF3, we hypothesize that a sentence representation in mBERT is composed of a language-specific component, which identifies the language of the sentence, and a language-neutral component, which captures the meaning of the sentence in a language-independent way. We assume that the language-specific component is similar across all sentences in the language.
We use a pre-trained mBERT model that was made public with the BERT release. The model dimension is 768, hidden layer dimension 3072, self-attention uses 12 heads, the model has 12 layers. It uses a vocabulary of 120k wordpieces that is shared for all languages.
To train the language identification classifier, for each of the BERT languages we randomly selected 110k sentences of at least 20 characters from Wikipedia, and keep 5k for validation and 5k for testing for each language. The training data are also used for estimating the language centroids.
Results ::: Word Alignment.
Table TABREF15 shows that word-alignment based on mBERT representations surpasses the outputs of the standard FastAlign tool even if it was provided large parallel corpus. This suggests that word-level semantics are well captured by mBERT contextual embeddings. For this task, learning an explicit projection had a negligible effect on the performance.
FLOAT SELECTED: Table 4: Maximum F1 score for word alignment across layers compared with FastAlign baseline.
Following BIBREF3, we hypothesize that a sentence representation in mBERT is composed of a language-specific component, which identifies the language of the sentence, and a language-neutral component, which captures the meaning of the sentence in a language-independent way. We assume that the language-specific component is similar across all sentences in the language.
We use a pre-trained mBERT model that was made public with the BERT release. The model dimension is 768, hidden layer dimension 3072, self-attention uses 12 heads, the model has 12 layers. It uses a vocabulary of 120k wordpieces that is shared for all languages.
To train the language identification classifier, for each of the BERT languages we randomly selected 110k sentences of at least 20 characters from Wikipedia, and keep 5k for validation and 5k for testing for each language. The training data are also used for estimating the language centroids.
Results ::: Word Alignment.
Table TABREF15 shows that word-alignment based on mBERT representations surpasses the outputs of the standard FastAlign tool even if it was provided large parallel corpus. This suggests that word-level semantics are well captured by mBERT contextual embeddings. For this task, learning an explicit projection had a negligible effect on the performance.
FLOAT SELECTED: Table 4: Maximum F1 score for word alignment across layers compared with FastAlign baseline. | How they demonstrate that language-neutral component is sufficiently general in terms of modeling semantics to allow high-accuracy word-alignment? | The answers are shown as follows:
* Table TABREF15 shows that word-alignment based on mBERT representations surpasses the outputs of the standard FastAlign tool even if it was provided large parallel corpus. This suggests that word-level semantics are well captured by mBERT contextual embeddings. For this task, learning an explicit projection had a negligible effect on the performance.
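Concretely, the word-alignment experiment matches tokens across languages by similarity of their mBERT vectors. Assuming the per-token contextual embeddings have already been extracted, a greedy cosine-similarity aligner is a minimal sketch; the paper's actual extraction layer and any symmetrization are not reproduced here.

```python
import numpy as np

def align(src_emb, tgt_emb):
    """Greedily link each source token to the most cosine-similar target
    token, given already-extracted per-token contextual embeddings."""
    s = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    t = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = s @ t.T                    # (n_src, n_tgt) cosine-similarity matrix
    return [(i, int(j)) for i, j in enumerate(sim.argmax(axis=1))]

src = np.array([[1.0, 0.0], [0.0, 1.0]])   # toy 2-token "sentences"
tgt = np.array([[0.0, 1.0], [0.9, 0.1]])
print(align(src, tgt))  # [(0, 1), (1, 0)]
```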
|
1709.06671 | false | null | To overcome the above-mentioned challenges, we propose a locally-linear meta-embedding learning method that (a) requires only the words in the vocabulary of each source embedding, without having to predict embeddings for missing words, (b) can meta-embed source embeddings with different dimensionalities, (c) is sensitive to the diversity of the neighbourhoods of the source embeddings.
Our proposed method comprises of two steps: a neighbourhood reconstruction step (Section "Nearest Neighbour Reconstruction" ), and a projection step (Section "Projection to Meta-Embedding Space" ). In the reconstruction step, we represent the embedding of a word by the linearly weighted combination of the embeddings of its nearest neighbours in each source embedding space. Although the number of words in the vocabulary of a particular source embedding can be potentially large, the consideration of nearest neighbours enables us to limit the representation to a handful of parameters per each word, not exceeding the neighbourhood size. The weights we learn are shared across different source embeddings, thereby incorporating the information from different source embeddings in the meta-embedding. Interestingly, vector concatenation, which has found to be an accurate meta-embedding method, can be derived as a special case of this reconstruction step.
To overcome the above-mentioned challenges, we propose a locally-linear meta-embedding learning method that (a) requires only the words in the vocabulary of each source embedding, without having to predict embeddings for missing words, (b) can meta-embed source embeddings with different dimensionalities, (c) is sensitive to the diversity of the neighbourhoods of the source embeddings.
Our proposed method comprises of two steps: a neighbourhood reconstruction step (Section "Nearest Neighbour Reconstruction" ), and a projection step (Section "Projection to Meta-Embedding Space" ). In the reconstruction step, we represent the embedding of a word by the linearly weighted combination of the embeddings of its nearest neighbours in each source embedding space. | What is the introduced meta-embedding method introduced in this paper? | The answers are shown as follows:
* proposed method comprises of two steps: a neighbourhood reconstruction step (Section "Nearest Neighbour Reconstruction" ), and a projection step (Section "Projection to Meta-Embedding Space" ). In the reconstruction step, we represent the embedding of a word by the linearly weighted combination of the embeddings of its nearest neighbours in each source embedding space.
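The neighbourhood reconstruction step is in the spirit of locally linear embedding: each word's vector is approximated as a weighted combination of its nearest neighbours' vectors, with the weights summing to one. A minimal numpy sketch under that reading follows; the regularization constant and the sum-to-one-only constraint are assumptions, not details from the paper.

```python
import numpy as np

def reconstruction_weights(x, neighbours, reg=1e-3):
    """Weights w minimizing ||x - sum_j w_j * neighbours[j]||^2, sum(w) = 1."""
    Z = neighbours - x                           # centre neighbours on x
    G = Z @ Z.T                                  # local Gram matrix
    G = G + reg * np.trace(G) * np.eye(len(G))   # regularise for stability
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()                           # enforce sum-to-one constraint

x = np.array([0.5, 0.0])
neighbours = np.array([[0.0, 0.0], [1.0, 0.0]])
print(reconstruction_weights(x, neighbours))  # close to [0.5 0.5]
```

Since the word sits at the midpoint of its two neighbours, the solved weights split evenly, which is the kind of locality-preserving signal the meta-embedding projection step then exploits.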
|
null | false | null | What factors should I consider before becoming a dog owner? | Here are some questions you may ask yourself before becoming a dog owner:
1. Whether you have time to spend with the new family member daily. Dogs are social animals that need companionship just like human beings. You will need to spend time walking, training and playing with your dogs every day.
2. Whether you have the financial ability to raise a dog. It's quite expensive to raise dogs. You will need to spend money on dog food, toys, pet necessities, vet bills, grooming, etc.
3. Whether you are prepared if you are a person who loves to travel. It will become harder to travel after having dogs.
4. Whether you are allergic to dogs.
5. Whether your current living condition is suitable for raising dogs. You also need to research what dog breeds are suitable for you, and dogs from different breeds have very different personalities. Is your home big enough for dogs of the breed you want to have? Are you living in a neighborhood that allows you to walk dogs easily and offers convenient access to facilities like dog parks? Do you have kids?
6. Whether you are prepared for dog training. Dogs need proper training to have good manners, which takes time, patience and persistence. | |
null | false | 34 | We evaluate the proposed model on the SimpleQuestions dataset BIBREF0 . The dataset consists of 108,442 single-relation questions and their corresponding (topic entity, predicate, answer entity) triples from Freebase. It is split into 75,910 train, 10,845 validation, and 21,687 test questions. Only 10,843 of the 45,335 unique words in entity aliases and 886 out of 1,034 unique predicates in the test set were present in the train set. For the proposed dataset, there are two evaluation settings, called FB2M and FB5M, respectively. The former uses a KB for candidate generation which is a subset of Freebase and contains 2M entities, while the latter uses subset of Freebase with 5M entities.
In our experiments, the Memory Neural Networks (MemNNs) proposed in babidataset serve as the baselines. For training, in addition to the 76K questions in the training set, the MemNNs use 3K training questions from WebQuestions BIBREF27 , 15M paraphrases from WikiAnswers BIBREF2 , and 11M and 12M automatically generated questions from the KB for the FB2M and FB5M settings, respectively. In contrast, our models are trained only on the 76K questions in the training set.
For our model, both layers of the LSTM-based question encoder have size 200. The hidden layers of the LSTM-based decoder have size 100, and the CNNs for entity and predicate embeddings have a hidden layer of size 200 and an output layer of size 100. The CNNs for entity and predicate embeddings use a receptive field of size 4, $\lambda =5$ , and $m=100$ . We train the models using RMSProp with a learning rate of $1e^{-4}$ .
In order to make the input character sequence long enough to fill up the receptive fields of multiple CNN layers, we pad each predicate or entity using three padding symbols $P$, a special start symbol, and a special end symbol. For instance, $Obama$ would become $S_{start}PPPObamaPPPS_{end}$. For consistency, we apply the same padding to the questions.
For the proposed dataset, there are two evaluation settings, called FB2M and FB5M, respectively. The former uses a KB for candidate generation which is a subset of Freebase and contains 2M entities, while the latter uses subset of Freebase with 5M entities. | What evaluation settings are used for the proposed dataset? | Two evaluation settings, called FB2M and FB5M, respectively. |
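The padding scheme described for the character CNNs can be sketched directly; the concrete start and end marker strings below are placeholders, since the paper writes them abstractly as S_start and S_end.

```python
def pad(text, n_pad=3, start="<s>", end="</s>", pad_sym="P"):
    """Wrap a string in start/end symbols plus n_pad padding symbols on
    each side, so multi-layer CNN receptive fields are filled even for
    short entity or predicate names."""
    return start + pad_sym * n_pad + text + pad_sym * n_pad + end

print(pad("Obama"))  # <s>PPPObamaPPP</s>
```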
1606.00189 | true | null | Grammatical error correction (GEC) is a challenging task due to the variability of the type of errors and the syntactic and semantic dependencies of the errors on the surrounding context. Most of the grammatical error correction systems use classification and rule-based approaches for correcting specific error types. However, these systems use several linguistic cues as features. The standard linguistic analysis tools like part-of-speech (POS) taggers and parsers are often trained on well-formed text and perform poorly on ungrammatical text. This introduces further errors and limits the performance of rule-based and classification approaches to GEC. As a consequence, the phrase-based statistical machine translation (SMT) approach to GEC has gained popularity because of its ability to learn text transformations from erroneous text to correct text from error-corrected parallel corpora without any additional linguistic information. They are also not limited to specific error types. Currently, many state-of-the-art GEC systems are based on SMT or use SMT components for error correction BIBREF0 , BIBREF1 , BIBREF2 . In this paper, grammatical error correction includes correcting errors of all types, including word choice errors and collocation errors which constitute a large class of learners' errors.
We model our GEC system based on the phrase-based SMT approach. However, traditional phrase-based SMT systems treat words and phrases as discrete entities. We take advantage of continuous space representation by adding two neural network components that have been shown to improve SMT systems BIBREF3 , BIBREF4 . These neural networks are able to capture non-linear relationships between source and target sentences and can encode contextual information more effectively. Our experiments show that the addition of these two neural networks leads to significant improvements over a strong baseline and outperforms the current state of the art.
To train NNJM, we use the publicly available implementation, Neural Probabilistic Language Model (NPLM) BIBREF14 . The latest version of Moses can incorporate NNJM trained using NPLM as a feature while decoding. Similar to NNGLM, we use the parallel text used for training the translation model in order to train NNJM. We use a source context window size of 5 and a target context window size of 4. We select a source context vocabulary of 16,000 most frequent words from the source side. The target context vocabulary and output vocabulary is set to the 32,000 most frequent words. We use a single hidden layer to speed up training and decoding with an input embedding dimension of 192 and 512 hidden layer nodes. We use rectified linear units (ReLU) as the activation function. We train NNJM with noise contrastive estimation with 100 noise samples per training instance, which are obtained from a unigram distribution. The neural network is trained for 30 epochs using stochastic gradient descent optimization with a mini-batch size of 128 and learning rate of 0.1.
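The NNJM architecture described above can be sketched as a single forward pass: the 5 source-context and 4 target-history words are embedded, concatenated, and passed through one ReLU hidden layer to produce scores over the target vocabulary. This is a minimal numpy illustration of the model shape, not the actual NPLM implementation; all dimensions are shrunk for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy sizes standing in for the paper's settings (source window 5, target
# window 4, input embedding dim 192, 512 hidden units, 32k output vocab)
SRC_WIN, TGT_WIN, EMB, HID, VOCAB = 5, 4, 8, 16, 50

E_src = rng.normal(size=(VOCAB, EMB))   # source-context embeddings
E_tgt = rng.normal(size=(VOCAB, EMB))   # target-history embeddings
W1 = rng.normal(size=((SRC_WIN + TGT_WIN) * EMB, HID))
W2 = rng.normal(size=(HID, VOCAB))

def nnjm_logits(src_ctx, tgt_hist):
    """Single-hidden-layer joint model: concatenated context embeddings
    -> ReLU hidden layer -> unnormalized scores for the next target word."""
    x = np.concatenate([E_src[src_ctx].ravel(), E_tgt[tgt_hist].ravel()])
    h = np.maximum(0.0, x @ W1)   # ReLU activation, as in the paper
    return h @ W2

logits = nnjm_logits([1, 2, 3, 4, 5], [6, 7, 8, 9])
```

In the real system these scores would be normalized (or trained with noise contrastive estimation, which avoids computing the full softmax normalizer).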
Do they use pretrained word representations in their neural network models? | No.
1908.06379 | false | null | We evaluate our model on two benchmark treebanks, English Penn Treebank (PTB) and Chinese Penn Treebank (CTB5.1) following standard data splitting BIBREF30, BIBREF31. POS tags are predicted by the Stanford Tagger BIBREF32. For constituent parsing, we use the standard evalb tool to evaluate the F1 score. For dependency parsing, we apply Stanford basic dependencies (SD) representation BIBREF4 converted by the Stanford parser. Following previous work BIBREF27, BIBREF33, we report the results without punctuations for both treebanks.
Tables TABREF17, TABREF18 and TABREF19 compare our model to existing state-of-the-art, in which indicator Separate with our model shows the results of our model learning constituent or dependency parsing separately, (Sum) and (Concat) respectively represent the results with the indicated input token representation setting. On PTB, our model achieves 93.90 F1 score of constituent parsing and 95.91 UAS and 93.86 LAS of dependency parsing. On CTB, our model achieves a new state-of-the-art result on both constituent and dependency parsing. The comparison again suggests that learning jointly in our model is superior to learning separately. In addition, we also augment our model with ELMo BIBREF48 or a larger version of BERT BIBREF49 as the sole token representation to compare with other pre-training models. Since BERT is based on sub-word, we only take the last sub-word vector of the word in the last layer of BERT as our sole token representation $x_i$. Moreover, our single model of BERT achieves competitive performance with other ensemble models.
Multitask learning (MTL) is a natural solution in neural models for multiple inputs and multiple outputs, which is adopted in this work to decode constituent and dependency in a single model. BIBREF15 indicates that when tasks are sufficiently similar, especially with syntactic nature, MTL would be useful. In contrast to previous work on deep MTL BIBREF16, BIBREF17, our model focuses on more related tasks and benefits from the strong inherent relation. At last, our model is evaluated on two benchmark treebanks for both constituent and dependency parsing. The empirical results show that our parser reaches new state-of-the-art for all parsing tasks.
How is dependency parsing empirically verified? | The answers are shown as follows:
* At last, our model is evaluated on two benchmark treebanks for both constituent and dependency parsing. The empirical results show that our parser reaches new state-of-the-art for all parsing tasks.
|
null | false | null | Give me a bulleted list of ingredients that I need to bake chewy chocolate chip cookies, and include volume measurements for any ingredient I have to measure out. | - 340 grams chocolate chips
- 250 grams all-purpose flour
- 210 grams packed brown sugar
- 170 grams softened butter
- 100 grams white sugar
- 15 grams vanilla extract
- 3 grams salt
- 3 grams baking soda
- 2 large eggs | |
1910.03467 | true | null | We conduct two out of the three proposed approaches for Japanese-Vietnamese translation systems and the results are given in the Table TABREF15.
FLOAT SELECTED: Table 1: Results of Japanese-Vietnamese NMT systems
Is the supervised morphological learner tested on Japanese? | No.
1912.06813 | false | null | We conducted our experiments on the CMU ARCTIC database BIBREF33, which contains parallel recordings of professional US English speakers sampled at 16 kHz. One female (slt) was chosen as the target speaker and one male (bdl) and one female (clb) were chosen as sources. We selected 100 utterances each for validation and evaluation, and the other 932 utterances were used as training data. For the TTS corpus, we chose a US female English speaker (judy bieber) from the M-AILABS speech dataset BIBREF34 to train a single-speaker Transformer-TTS model. With the sampling rate also at 16 kHz, the training set contained 15,200 utterances, which were roughly 32 hours long.
What datasets are experimented with? | The answers are shown as follows:
* the CMU ARCTIC database BIBREF33
* the M-AILABS speech dataset BIBREF34
|
null | false | 144 | Multi-document summarization (MDS), the transformation of a set of documents into a short text containing their most important aspects, is a long-studied problem in NLP. Generated summaries have been shown to support humans dealing with large document collections in information seeking tasks BIBREF0 , BIBREF1 , BIBREF2 . However, when exploring a set of documents manually, humans rarely write a fully-formulated summary for themselves. Instead, user studies BIBREF3 , BIBREF4 show that they note down important keywords and phrases, try to identify relationships between them and organize them accordingly. Therefore, we believe that the study of summarization with similarly structured outputs is an important extension of the traditional task.
A representation that is more in line with observed user behavior is a concept map BIBREF5 , a labeled graph showing concepts as nodes and relationships between them as edges (Figure FIGREF2 ). Introduced in 1972 as a teaching tool BIBREF6 , concept maps have found many applications in education BIBREF7 , BIBREF8 , for writing assistance BIBREF9 or to structure information repositories BIBREF10 , BIBREF11 . For summarization, concept maps make it possible to represent a summary concisely and clearly reveal relationships. Moreover, we see a second interesting use case that goes beyond the capabilities of textual summaries: When concepts and relations are linked to corresponding locations in the documents they have been extracted from, the graph can be used to navigate in a document collection, similar to a table of contents. An implementation of this idea has been recently described by BIBREF12 .
The corresponding task that we propose is concept-map-based MDS, the summarization of a document cluster in the form of a concept map. In order to develop and evaluate methods for the task, gold-standard corpora are necessary, but no suitable corpus is available. The manual creation of such a dataset is very time-consuming, as the annotation includes many subtasks. In particular, an annotator would need to manually identify all concepts in the documents, while only a few of them will eventually end up in the summary.
To overcome these issues, we present a corpus creation method that effectively combines automatic preprocessing, scalable crowdsourcing and high-quality expert annotations. Using it, we can avoid the high effort for single annotators, allowing us to scale to document clusters that are 15 times larger than in traditional summarization corpora. We created a new corpus of 30 topics, each with around 40 source documents on educational topics and a summarizing concept map that is the consensus of many crowdworkers (see Figure FIGREF3 ).
As a crucial step of the corpus creation, we developed a new crowdsourcing scheme called low-context importance annotation. In contrast to traditional approaches, it allows us to determine important elements in a document cluster without requiring annotators to read all documents, making it feasible to crowdsource the task and overcome quality issues observed in previous work BIBREF13 . We show that the approach creates reliable data for our focused summarization scenario and, when tested on traditional summarization corpora, creates annotations that are similar to those obtained by earlier efforts.
To summarize, we make the following contributions: (1) We propose a novel task, concept-map-based MDS (§ SECREF2 ), (2) present a new crowdsourcing scheme to create reference summaries (§ SECREF4 ), (3) publish a new dataset for the proposed task (§ SECREF5 ) and (4) provide an evaluation protocol and baseline (§ SECREF7 ). We make these resources publicly available under a permissive license.
Why does the author propose concept-map-based MDS? | Because humans rarely write a fully-formulated summary for themselves when exploring a set of documents; instead, they note down important keywords and phrases and organize them.
null | false | null | On March 27, 2023, a mass shooting occurred at The Covenant School, a private Presbyterian Church in America parochial school in the Green Hills neighborhood of Nashville, Tennessee. Local resident and former student of the school Aiden Hale (formerly Audrey Elizabeth Hale) killed three children and three adults. Hale, 28, was killed by two Metropolitan Nashville Police Department officers responding to the incident.
Hale was initially reported to be a woman and referred to by his birth name by police. Later, it was confirmed that Hale was a trans man who went by the name Aiden and used he/him pronouns.
Hale drove a Honda Fit to the school, arriving at 9:54 a.m. CDT and parking it in the lot. At 9:57, Hale sent an Instagram message to an old friend saying an earlier post he made was "basically a suicide note" and that he planned to die that day. His friend called a crisis hotline before contacting the Davidson County Sheriff's Office at 10:13.
At 10:11, Hale shot through a set of glass side doors and entered the building. He was armed with two rifles and a pistol. At 10:13, police received a call about an active shooter. He walked across the second floor of the school before opening fire. He fired into several classrooms; no one in the classrooms was injured because the teachers had fortified doors and sheltered the students. The police first arrived at the school at 10:24. A teacher told an officer that the students were in lockdown and two were missing.
Officers entered the building at 10:25 and searched each room for Hale. They heard gunshots coming from the second floor. Five Metro Nashville police officers went upstairs and saw him in a lobby area, firing through a window at arriving police vehicles. Two of the officers fired four times each, killing him at 10:27, 14 minutes after the initial 911 call was made.
null | false | null | What are the main differences between traditional fishing with a spinning rod or baitcasting rod and fly fishing? | The main differences between the two methods of fishing are the types of lures that are used and in the way the lure is delivered by a cast of the rod. In traditional or spin fishing, the lure itself is weighted and this weight provides the leverage necessary to cast out into the water. In contrast, fly fishing the lures are typically called flies and are very light. Because they are so light, a heavier line is used and the weight of the line is what allows the fly to be cast out and delivered to the water. Spin fishing lures are often measured in ounces of weight and are often made of metal or plastic. They can range in size and design depending on the target species of fish. Fly fishing flies are typically very lightweight hairs, threads, or other materials tied to a hook. The fly fishing line weights are measured in grains. | |
null | false | 200 | Recent studies have shown the vulnerability of ML models to adversarial attacks, small perturbations which lead to misclassification of inputs. Adversarial example generation in NLP BIBREF0 is more challenging than in common computer vision tasks BIBREF1, BIBREF2, BIBREF3 due to two main reasons: the discrete nature of input space and ensuring semantic coherence with the original sentence. A major bottleneck in applying gradient based BIBREF4 or generator model BIBREF5 based approaches to generate adversarial examples in NLP is the backward propagation of the perturbations from the continuous embedding space to the discrete token space.
Recent works for attacking text models rely on introducing errors at the character level in words BIBREF6, BIBREF7 or adding and deleting words BIBREF8, BIBREF9, BIBREF10, etc. for creating adversarial examples. These techniques often result in adversarial examples which are unnatural looking and lack grammatical correctness, and thus can be easily identified by humans.
TextFooler BIBREF11 is a black-box attack, that uses rule based synonym replacement from a fixed word embedding space to generate adversarial examples. These adversarial examples do not account for the overall semantics of the sentence, and consider only the token level similarity using word embeddings. This can lead to out-of-context and unnaturally complex replacements (see Table ), which can be easily identifiable by humans.
The recent advent of powerful language models BIBREF12, BIBREF13 in NLP has paved the way for using them in various downstream applications. In this paper, we present a simple yet novel technique: BAE (BERT-based Adversarial Examples), which uses a language model (LM) for token replacement to best fit the overall context. We perturb an input sentence by either replacing a token or inserting a new token in the sentence, by means of masking a part of the input and using a LM to fill in the mask (See Figure FIGREF1). BAE relies on the powerful BERT masked LM to ensure grammatical correctness of the adversarial examples. Our attack beats the previous baselines by a large margin and confirms the inherent vulnerabilities of modern text classification models to adversarial attacks. Moreover, BAE produces richer and more natural-looking adversarial examples as it uses the semantics learned by a LM.
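The replace/insert perturbation step can be sketched as follows. The masked-LM call itself is left out (in BAE it is BERT proposing fillers for each mask); the function names and the mask token string are illustrative:

```python
MASK = "[MASK]"

def replace_candidates(tokens):
    """Replace mode: mask each token in turn; an LM would then propose fillers
    for the masked position given the surrounding context."""
    return [tokens[:i] + [MASK] + tokens[i + 1:] for i in range(len(tokens))]

def insert_candidates(tokens):
    """Insert mode: add a mask at each position; an LM would then propose a
    new token to insert there."""
    return [tokens[:i] + [MASK] + tokens[i:] for i in range(len(tokens) + 1)]

sent = "the movie was good".split()
print(replace_candidates(sent)[3])  # ['the', 'movie', 'was', '[MASK]']
print(insert_candidates(sent)[2])   # ['the', 'movie', '[MASK]', 'was', 'good']
```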
To the best of our knowledge, we are the first to use a LM for adversarial example generation. We summarize our major contributions as follows:
We propose BAE, a novel strategy for generating natural looking adversarial examples using a masked language model.
We introduce 4 BAE attack modes, all of which are almost always stronger than previous baselines on 7 text classification datasets.
We show that, surprisingly, just a few replace/insert operations can reduce the accuracy of even a powerful BERT-based classifier by over $80\%$ on some datasets.
What is BAE (BERT-based Adversarial Examples)? | It is a simple yet novel technique that uses a language model (LM) for token replacement to best fit the overall context.
null | false | 52 | Feature elimination strategies are often taken 1) to remove irrelevant or noisy features, 2) to improve classifier performance, and 3) to reduce training and run times. We conducted an experiment to determine whether we could maintain or improve classifier performances by applying the following three-tiered feature elimination approach:
Reduction We reduced the dataset encoded for each class by eliminating features that occur less than twice in the full dataset.
Selection We iteratively applied Chi-Square feature selection on the reduced dataset, selecting the top percentile of highest ranked features in increments of 5 percent to train and test the support vector model using a linear kernel and 5-fold, stratified cross-validation.
Rank We cumulatively plotted the average F1-score performances of each incrementally added percentile of top ranked features. We report the percentile and count of features resulting in the first occurrence of the highest average F1-score for each class.
All experiments were programmed using scikit-learn 0.18.
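The chi-square scorer used in the selection step reduces, for nonnegative count features, to comparing each feature's observed per-class mass with the mass expected if the feature were independent of the class. A self-contained sketch on toy data (the actual experiments used scikit-learn's `chi2` scorer with percentile-based selection):

```python
def chi2_scores(X, y):
    """Chi-square score per feature for nonnegative count features:
    sum over classes of (observed - expected)^2 / expected, where
    'expected' assumes the feature is independent of the class."""
    classes = sorted(set(y))
    n = len(y)
    scores = []
    for j in range(len(X[0])):
        total = sum(row[j] for row in X)
        score = 0.0
        for c in classes:
            n_c = sum(1 for label in y if label == c)
            observed = sum(row[j] for row, label in zip(X, y) if label == c)
            expected = total * n_c / n
            if expected > 0:
                score += (observed - expected) ** 2 / expected
        scores.append(score)
    return scores

# toy document-term counts: feature 0 tracks the class, feature 1 is noise
X = [[3, 1], [4, 1], [0, 1], [1, 1]]
y = [1, 1, 0, 0]
scores = chi2_scores(X, y)
# feature 0 scores far higher than the uninformative feature 1
```

Ranking features by these scores and keeping the top percentile reproduces the selection procedure described above.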
The initial matrices of almost 17,000 features were reduced by eliminating features that only occurred once in the full dataset, resulting in 5,761 features. We applied Chi-Square feature selection and plotted the top-ranked subset of features for each percentile (at 5 percent intervals cumulatively added) and evaluated their predictive contribution using the support vector machine with linear kernel and stratified, 5-fold cross validation.
In Figure 2, we observed optimal F1-score performance using the following top feature counts: no evidence of depression: F1: 87 (15th percentile, 864 features), evidence of depression: F1: 59 (30th percentile, 1,728 features), depressive symptoms: F1: 55 (15th percentile, 864 features), depressed mood: F1: 39 (55th percentile, 3,168 features), disturbed sleep: F1: 46 (10th percentile, 576 features), and fatigue or loss of energy: F1: 72 (5th percentile, 288 features) (Figure 1). We note F1-score improvements for depressed mood from F1: 13 at the 1st percentile to F1: 33 at the 20th percentile.
We observed peak F1-score performances at low percentiles for fatigue or loss of energy (5th percentile), disturbed sleep (10th percentile) as well as depressive symptoms and no evidence of depression (both 15th percentile) suggesting fewer features are needed to reach optimal performance. In contrast, peak F1-score performances occurred at moderate percentiles for evidence of depression (30th percentile) and depressed mood (55th percentile) suggesting that more features are needed to reach optimal performance. However, one notable difference between these two classes is the dramatic F1-score improvements for depressed mood i.e., 20 point increase from the 1st percentile to the 20th percentile compared to the more gradual F1-score improvements for evidence of depression i.e., 11 point increase from the 1st percentile to the 20th percentile. This finding suggests that for identifying depressed mood a variety of features are needed before incremental gains are observed.
What does the author use to assess their predictive contribution? | Chi-Square feature selection.
null | false | null | What is refinancing? | Refinancing is the replacement of an existing debt obligation with another debt obligation under a different term and interest rate. The terms and conditions of refinancing may vary widely by country, province, or state, based on several economic factors such as inherent risk, projected risk, political stability of a nation, currency stability, banking regulations, borrower's credit worthiness, and credit rating of a nation. In many industrialized nations, common forms of refinancing include primary residence mortgages and car loans. | |
null | false | 46 | The used architecture of the QA model is briefly summarized below. Here we choose QANet BIBREF2 as the base model due to the following reasons: 1) it achieves the second best performance on SQuAD, and 2) since there are completely no recurrent networks in QANet, its training speed is 5x faster than BiDAF BIBREF17 when reaching the same performance on SQuAD.
The network architecture is illustrated in Figure FIGREF2 . The left blocks and the right blocks form two QANets, each of which takes a document and a question as the input and outputs an answer. In QANet, firstly, an embedding encoder obtains word and character embeddings for each word in INLINEFORM0 or INLINEFORM1 and then models the temporal interactions between words and refines word vectors to contextualized word representations. All encoder blocks used in QANet are composed exclusively of depth-wise separable convolutions and self-attention. The intuition here is that convolution components can model local interactions and self-attention components focus on modeling global interactions. The context-query attention layer generates the question-document similarity matrix and computes the question-aware vector representations of the context words. After that, a model encoder layer containing seven encoder blocks captures the interactions among the context words conditioned on the question. Finally, the output layer predicts a start position and an end position in the document to extract the answer span from the document.
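A depth-wise separable convolution factorizes a standard convolution into a per-channel (depth-wise) filter followed by a 1x1 point-wise mixing step, which is what keeps the QANet encoder blocks cheap. A minimal 1-D numpy sketch (shapes only; no padding, bias, or stride — all simplifications for illustration):

```python
import numpy as np

def depthwise_separable_conv1d(x, depth_filters, point_weights):
    """x: (length, channels); depth_filters: (k, channels);
    point_weights: (channels, out_channels)."""
    length, channels = x.shape
    k = depth_filters.shape[0]
    out_len = length - k + 1
    # depth-wise step: each channel is convolved with its own length-k filter
    depth_out = np.empty((out_len, channels))
    for t in range(out_len):
        depth_out[t] = (x[t:t + k] * depth_filters).sum(axis=0)
    # point-wise step: a 1x1 convolution mixes information across channels
    return depth_out @ point_weights

rng = np.random.default_rng(0)
x = rng.normal(size=(10, 4))
y = depthwise_separable_conv1d(x, rng.normal(size=(3, 4)), rng.normal(size=(4, 6)))
print(y.shape)  # (8, 6)
```

The factorization uses k*C + C*C_out parameters instead of k*C*C_out for a full convolution.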
What do the encoder blocks used in QANet consist of? | All encoder blocks used in QANet are composed exclusively of depth-wise separable convolutions and self-attention.
null | false | null | Give me a list of the lines on the London Underground | - Bakerloo
- Central
- Circle
- District
- Hammersmith & City
- Jubilee
- Metropolitan
- Northern
- Piccadilly
- Victoria
- Waterloo & City | |
null | false | 513 | The advantage of SLASH lies in the efficient integration of neural, probabilistic and symbolic computations. To emphasize this, we conduct a variety of experimental evaluations.
Experimental Details. We use two benchmark data sets, namely MNIST for the task of MNIST-Addition and a variant of the ShapeWorld data set for set prediction. For the ShapeWorld experiments, we generate a data set we refer to as ShapeWorld4. Images of ShapeWorld4 contain between one and four objects, with each object consisting of four attributes: a color (red, blue, green, gray, brown, magenta, cyan or yellow), a shade (bright or dark), a shape (circle, triangle or square) and a size (small or big). Thus, each object can be created from 96 different combinations of attributes (8 colors x 2 shades x 3 shapes x 2 sizes). Fig. depicts an example image.
We measure performance via classification accuracies in the MNIST-Addition task. In our Shape-World4 experiments, we present the average precision. We refer to appendix B for the SLASH programs and queries of each experiment, and appendix C for a detailed description of hyperparameters and further details.
Evaluation 1: SLASH outperforms SOTA DPPLs in MNIST-Addition. The task of MNIST-Addition is to predict the sum of two MNIST digits, presented only as raw images. During test time, however, a model should classify the images directly. Thus, although a model does not receive explicit information about the depicted digits, it must learn to identify digits via indirect feedback on the sum prediction.
We compare the test accuracy after convergence between the three DPPLs: DeepProbLog, NeurASP and SLASH, using a probabilistic circuit (PC) or a deep neural network (DNN) as NPP. Notably, the DNN used in SLASH (DNN) is the LeNet5 model of DeepProbLog and NeurASP. We note that when using the PC as NPP, we have also extracted conditional class probabilities P (C|X), by marginalizing the class variables C to acquire the normalization constant P (X) from the joint P (X, C), and calculating P (X|C).
The results can be seen in Tab. 1a. We observe that training SLASH with a DNN NPP produces SOTA accuracies compared to DeepProbLog and NeurASP, confirming that SLASH's batch-wise loss computation leads to improved performances. We further observe that the test accuracy of SLASH with a PC NPP is slightly below the other DPPLs, however we argue that this may be since a PC, in comparison to a DNN, is learning a true mixture density rather than just conditional probabilities. The advantages of doing so will be investigated in the next experiments. Note that, optimal architecture search for PCs, e.g. for computer vision, is an open research question.These evaluations show SLASH's advantages on the benchmark MNIST-Addition task. Additional benefits will be made clear in the following experiments.
Evaluation 2: Handling Missing Data with SLASH. SLASH offers the advantage of its flexibility to use various kinds of NPPs. Thus, in comparison to previous DPPLs, one can easily integrate NPPs into SLASH that perform joint probability estimation. For this evaluation, we consider the task of MNIST-Addition with missing data. We trained SLASH (PC) and DeepProbLog with the MNIST-Addition task with images in which a percentage of pixels per image has been removed. It is important to mention here that whereas DeepProbLog handles the missing data simply as background pixels, SLASH (PC) specifically models the missing data as uncertain data by marginalizing the denoted pixels at inference time. We use DeepProbLog here representative of DPPLs without true density estimation. The results can be seen in Tab. 1b for 50%, 80%, 90% and 97% missing pixels per image. We observe that at 50%, DeepProbLog and SLASH produce almost equal accuracies. With 80% percent missing pixels, there is a substantial difference in the ability of the two DPPLs to correctly classify images, with SLASH being very stable. By further increasing the percentage of missing pixels, this difference becomes even more substantial with SLASH still reaching a 82% test accuracy even when 97% of the pixels per image are missing, whereas DeepProbLog degrades to an average of 32% test accuracy. We further note that SLASH, in comparison to DeepProbLog, produces largely reduced standard deviations over runs.
Thus, by utilizing the power of true density estimation SLASH, with an appropriate NPP, can produce more robust results in comparison to other DPPLs. Further, we refer to Appendix D, which contains results of additional experiments where training is performed with the full MNIST data set whereas only the test set entails different rates of missing pixels.
Evaluation 3: Improved Concept Learning via SLASH. We show that SLASH can be very effective for the complex task of set prediction, which previous DPPLs have not tackled. We revert to the ShapeWorld4 data set for this setting. For set prediction, a model is trained to predict the discrete attributes of a set of objects in an image (cf. Fig. for an example ShapeWorld4 image). The difficulty for the model lies therein that it must match an unordered set of corresponding attributes (with varying number of entities over samples) with its internal representations of the image.
The slot attention module introduced by allows for an attractive object-centric approach to this task. Specifically, this module is a pluggable, differentiable module that can easily be added to any architecture and, through a competitive softmax-based attention mechanism, can enforce the binding of specific parts of a latent representation into permutation-invariant, task-specific vectors, called slots.
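A minimal, untrained sketch of this competitive attention step may help make it concrete. The identity projections and single iteration below are simplifications of the actual slot attention module (which uses learned linear maps, layer norms, a GRU update, and several iterations); the key detail shown is that the softmax is normalized over the slot axis, which makes the slots compete for input features:

```python
import numpy as np

# One simplified slot-attention update: slots compete for input features
# because the softmax normalizes over the slot axis, not the input axis.
def slot_attention_step(slots, inputs, eps=1e-8):
    d = slots.shape[-1]
    logits = inputs @ slots.T / np.sqrt(d)                  # (n_inputs, n_slots)
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn = attn / attn.sum(axis=1, keepdims=True)           # softmax over slots
    attn = attn / (attn.sum(axis=0, keepdims=True) + eps)   # weighted mean per slot
    return attn.T @ inputs                                  # updated slots (n_slots, d)

rng = np.random.default_rng(0)
slots = rng.normal(size=(3, 4))     # 3 slots of dimension 4
inputs = rng.normal(size=(10, 4))   # 10 input feature vectors
updated = slot_attention_step(slots, inputs)
```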
In our experiments, we wish to show that by adding logical constraints to the training setting, one can improve the overall performance and generalization properties of such a model. For this, we train SLASH with NPPs as depicted in Fig. , consisting of a shared slot encoder and separate PCs, each modelling the mixture of latent slot variables and the attributes of one category, e.g. color.
For ShapeWorld4, we thereby have altogether four NPPs. SLASH is trained via queries of the kind exemplified in Fig. in the Appendix. We refer to this configuration as SLASH Attention.
We compare SLASH Attention to a baseline slot attention encoder that uses an MLP and a Hungarian loss for predicting the object properties from the slot encodings, as in. The results of these experiments can be found in Fig. (top). We observe that the average precision after convergence on the held-out test set is greatly improved with SLASH Attention over that of the baseline model. Additionally, in Fig. we observe that SLASH Attention reaches the average precision of the baseline model in far fewer epochs. Thus, we can summarize that adding logical knowledge to the training procedure via SLASH can greatly improve the capabilities of a neural module for set prediction.
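For reference, the permutation-invariant Hungarian matching loss used by the baseline can be sketched as follows. The squared-error cost is an illustrative stand-in for whatever attribute-wise cost the baseline actually uses; the essential ingredient is the optimal one-to-one assignment between predicted slots and ground-truth objects:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hungarian matching loss: build a pairwise cost matrix between predicted
# and target attribute vectors, then take the cost of the optimal matching.
def hungarian_loss(preds, targets):
    cost = ((preds[:, None, :] - targets[None, :, :]) ** 2).sum(-1)  # (n_pred, n_tgt)
    rows, cols = linear_sum_assignment(cost)                          # optimal assignment
    return cost[rows, cols].mean()

preds = np.array([[1.0, 0.0], [0.0, 1.0]])
targets = np.array([[0.0, 1.0], [1.0, 0.0]])   # same set, permuted
loss = hungarian_loss(preds, targets)          # permutation-invariant
```

Because the loss minimizes over permutations, the order in which the model emits its slots does not matter — here the targets are a permutation of the predictions, so the loss is zero.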
Evaluation 4: Improved Compositional Generalization with SLASH. To test the hypothesis that SLASH Attention possesses improved generalization properties in comparison to the baseline model, we ran experiments on a variant of ShapeWorld4 similar to the CLEVR Compositional Generalization Test (CoGenT). The goal of CoGenT is to investigate a model's ability to handle novel combinations of attributes that were not seen during training.
For this purpose, we established two conditions within a ShapeWorld4 CoGenT data set. Condition (A): the training and test data sets contain squares with the colors gray, blue, brown, or yellow, triangles with the colors red, green, magenta, or cyan, and circles of all colors. Condition (B): the training set is as in Condition (A), but the test set contains squares with the colors red, green, magenta, or cyan, triangles with the colors gray, blue, brown, or yellow, and circles of all colors. The goal is to investigate how well a model can generalize, e.g., that squares can also be red, despite never having seen evidence for this during training.
The resulting average precision test scores are presented in Fig. (bottom). We observe that, even though the SLASH program used for this experiment was not explicitly written to handle compositional generalization, SLASH Attention shows greatly improved generalization capabilities. This can be seen in the approx. 13% higher average precision scores on the Condition (B) test set in comparison to the baseline model. Importantly, this trend still holds even when subtracting the higher precision scores observed in Condition (A).
To summarize our findings from the experiments on set prediction: we observe that adding prior knowledge in the form of logical constraints via SLASH can greatly improve a neural module in terms of performance and generalizability. On a side note: training neural networks for novel tasks often involves defining explicit loss functions, e.g. the Hungarian loss for set prediction. In contrast, with SLASH, no matter the choice of NPP and underlying task, the training loss remains the same; task-related requirements simply need to be added as lines of code to the SLASH program. This additionally highlights SLASH's versatility and flexibility.
The results can be seen in Tab. 1a. We observe that training SLASH with a DNN NPP produces SOTA accuracies compared to DeepProbLog and NeurASP, confirming that SLASH’s batch-wise loss computation leads to improved performances. | SLASH’s batch-wise loss -> confirming? | If we understand your question correctly, yes SLASH's loss is batch-wise. SLASH's batch-wise loss paves the way for the parallelization of the ASP-calls and so the scalability necessary for Set Prediction Task being performed in reasonable time. |
null | false | null | Where can I buy leather pants for women? | You can find leather pants at a variety of retailers. For lower cost options, try stores like Zara, H&M, Old Navy, and Gap. If you have a higher price point, try stores like Nordstrom, Macy's, Abercrombie & Fitch, or online retailers Revolve or Lulus. | |
null | false | null | How did Andy Warhol create the "piss paintings"? | Andy Warhol created paintings in the 1970s by covering canvases with copper paint and having various staffers urinate on them. This created different colored splotches, pools, and lines. | |
null | false | null | Why do people live in homes? | People live in homes because its a way to protect you from the elements of nature like storms, heavy rain, lightning, heat and humidity. Its a place that offers comfort with air conditioning, offers a comfortable place to sleep, and offers a comfortable place to cook and eat and have entertaining activities like television, and spending time with your family. | |
1912.00871
false
null
We compare medium-sized, small, and minimal networks to show whether network size can be reduced to increase training and testing efficiency while retaining high accuracy. Networks with over six layers have been shown to be ineffective for this task. We tried many configurations of our network models, but report results with only three configurations of Transformers.
Transformer Type 1: This network is a small to medium-sized network consisting of 4 Transformer layers. Each layer utilizes 8 attention heads with a depth of 512 and a feed-forward depth of 1024.
Transformer Type 2: The second model is small in size, using 2 Transformer layers. The layers utilize 8 attention heads with a depth of 256 and a feed-forward depth of 1024.
Transformer Type 3: The third type of model is minimal, using only 1 Transformer layer. This network utilizes 8 attention heads with a depth of 256 and a feed-forward depth of 512.
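One plausible way to instantiate these three configurations is with PyTorch's built-in encoder classes. The layer counts, head counts, model depths, and feed-forward depths below follow the text; everything else (dropout, activation, normalization) is left at library defaults and is not claimed to match the authors' setup:

```python
import torch.nn as nn

# The three reported configurations: (num_layers, d_model, dim_feedforward),
# all with 8 attention heads.
CONFIGS = {
    "type1": dict(num_layers=4, d_model=512, dim_feedforward=1024),
    "type2": dict(num_layers=2, d_model=256, dim_feedforward=1024),
    "type3": dict(num_layers=1, d_model=256, dim_feedforward=512),
}

def build_encoder(num_layers, d_model, dim_feedforward, nhead=8):
    layer = nn.TransformerEncoderLayer(
        d_model=d_model, nhead=nhead, dim_feedforward=dim_feedforward
    )
    return nn.TransformerEncoder(layer, num_layers=num_layers)

encoders = {name: build_encoder(**cfg) for name, cfg in CONFIGS.items()}
```

Note that each head attends over d_model / 8 dimensions (64 for Type 1, 32 for Types 2 and 3), so the "depth" in the text is the full model dimension, not the per-head dimension.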
We tried many configurations of our network models, but report results with only three configurations of Transformers.
Transformer Type 1: This network is a small to medium-sized network consisting of 4 Transformer layers. Each layer utilizes 8 attention heads with a depth of 512 and a feed-forward depth of 1024.
Transformer Type 2: The second model is small in size, using 2 Transformer layers. The layers utilize 8 attention heads with a depth of 256 and a feed-forward depth of 1024.
Transformer Type 3: The third type of model is minimal, using only 1 Transformer layer. This network utilizes 8 attention heads with a depth of 256 and a feed-forward depth of 512. | What neural configurations are explored? | The answers are shown as follows:
* tried many configurations of our network models, but report results with only three configurations
* Transformer Type 1
* Transformer Type 2
* Transformer Type 3
|
null | false | null | Scrubs focuses on the unique point of view of its main character and narrator, Dr. John Michael "J.D." Dorian (Zach Braff) for the first eight seasons, with season nine being narrated by the new main character Lucy Bennett (Kerry Bishé). Most episodes feature multiple story lines thematically linked by voice-overs done by Braff, as well as the comical daydreams of J.D. According to Bill Lawrence, "What we decided was, rather than have it be a monotone narration, if it's going to be Zach's voice, we're going to do everything through J.D.'s eyes. It opened up a visual medium that those of us as comedy writers were not used to." Actors were given the chance to improvise their lines on set with encouragement by series creator Bill Lawrence, with Neil Flynn and Zach Braff being the main improvisors. | What was unique about Scrubs? | Most episodes featured voice-overs done by Zach Braff (who stars as the main character, J.D.), as well as the comical daydreams of J.D. This opened up a unique visual medium that the comedy writers were not used to. Also, actors were given the chance to improvise their lines on set. |
null | false | null | What is a hunting dog? | Hunting dogs are usually paired with a human to help in the sport of hunting. They have been used for hundreds of years and are considered to improve the hunters abilities. Hunting dogs are typically pointers or retrievers. There are many breeds, such as German Shorthaired Pointers and labradors just to name a couple. | |
null | false | null | Name some female characters from "The X-Files." | Dana Scully, Marita Covarrubias, Melissa Scully, Margaret Scully, Leyla Harrison, Teena Mulder, Samantha Mulder, Diana Fowley, Monica Reyes, Cassandra Spender | |
null
false
null
Write a story about an ice fishing trip in Minnesota. Make sure to include what you need for a successful trip.
Ah, winter! It’s time to put on your long underwear. Slip on some wool socks and a sweater. Climb into your snowpants. Pull down your stocking cap. Grab your fishing rod and bait bucket.
There’s ice on the lake, and it’s the season to enjoy a true Minnesota adventure: ice fishing. Don’t worry about getting cold or bored on a frozen lake. Ice fishing is both easy and exciting. It’s fun to hike across the ice imagining hungry sunnies or walleyes lurking below. It’s an adventure to hang out around an ice hole with friends and family, telling stories and holding a funny-looking fishing rod as you wait for a bite. And it’s thrilling when your bobber suddenly vanishes down the hole, and you pull a slippery fish from the water with a splash. So grab a grown-up, a Thermos of hot cocoa, and get ready for an ice fishing adventure.
Start with a visit to your local bait store or DNR Fisheries office. Folks there can tell you in which lakes the fish are biting and where you can get onto the lake. They can also tell you where the ice is most likely to be OK. Wind, warm weather, underwater springs, and currents can all make ice unsafe. Ice must be at least 4 inches thick before you walk on it. (See Be Safe, page 45.) Once you know the ice is thick enough, you can go find a fishing spot. Here are three ways to look:
- If you know where fish hang out in summer, go there. Fish usually go to the same places in winter.
- Pick up a lake map at the bait shop or DNR and look for shallow areas or drop-offs (where the bottom gets deep quickly). Fish are more likely to be there.
- Look for anglers congregated at the best fishing holes. Ask if you can fish near them. (It’s not polite to drill holes too close to other anglers.)
If the fish aren’t biting in one spot, try another.
You can use a regular reel, or some anglers use a device called a tip-up instead. A tipup has two sticks. One lies across the hole. The other points down into the hole and has a spool with line. When a fish takes your bait, a flag springs up from the stick across the hole. Then you pull the fish up with the line. Tip-ups are fun because you can watch them while reading a book or tossing a Frisbee.
1. Cut a Hole.
a. To catch fish through the ice, you must first drill a hole. To drill a hole, ice anglers use a tool called an auger, which looks like a giant screw with a sharp blade on the end. Another handy tool is a spud, a long-handled chisel with a sharp blade for checking ice thickness and chipping extra ice from the hole. Anglers use a scoop, a big spoon with holes, to clean out ice shavings.
2. Find the Depth
a. If you know the depth of the water, you have a better idea of what fish to fish for. Bluegills and northerns like shallow water. Look for walleyes in deeper water. Some anglers use an electronic fish locator or a lead weight on a hook. You can also tie a string to a weight, hold the loose end in one hand, and drop the weight to the bottom. Then measure the length of string that’s down the hole. The simplest solution is to use a weighted hook, which drops to the bottom. Then you reel it up to fish as far off the bottom as you like.
3. Land your Catch
a. When you feel a fish on the line, reel it up steadily but not too fast because you might yank out the hook. When you see the fish, ask your adult fishing buddy to lift it out. After landing a fish, remove it from the hook. It’s easier to get a hook out with pliers than with your hands in cold weather. If the fish is too big or too small to keep, return it to the water. Or put your catch in a bucket or a snowbank. Watch out for dogs! A dog might try to run away with your catch! It happens.
4. Use Good Manners
a. Be polite and don’t disturb your fellow anglers with loud talk or goofing around. Always pick up your trash and anything else you brought. Remember: Anything you leave on the lake will go into the water when the ice melts. | |
null | false | null | What is MATLAB? | MATLAB is a proprietary multi-paradigm programming language and numeric computing environment developed by MathWorks. MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages. | |
null | false | null | Is it easier to travel with a baby or a toddler? | Most people have found that traveling is a baby is much easier. This is largely because they are not yet fully mobile and haven't learned to walk or run just yet. Babies are considered infants under the age of 1. Traveling with a baby or a toddler still requires a lot of "stuff", but babies are typically more easily entertained by everyday random objects, such as a cup, or a napkin, etc, whereas toddlers are further along in the developmental journey and need more stimulation, especially on a long journey. Babies are also small enough to wear in a baby carrier, leaving parents hands-free to be able to juggle the copious amounts of luggage and toys needed when traveling with little ones. Babies are typically easier to travel with on planes as they still sleep for more hours than they're typically awake. According to many studies, the ideal age to travel with children is between 4 months to a year. | |
null | false | null | What are four reasons why you have a car on your front lawn? | 1. I ran out of parking in my driveway
2. The car drips oil and the grass absorbs the oil better than the concrete driveway
3. It dropped from the sky and I don't have the money to get it removed
4. My wife often kicks me out of the house and I use it to sleep in | |
null | false | 173 | Encoding of words is perhaps the most important step towards a successful end-to-end natural language processing application. Although word embeddings have been shown to provide benefit to such models, they commonly treat words as the smallest meaning bearing unit and assume that each word type has its own vector representation. This assumption has two major shortcomings especially for languages with rich morphology: (1) inability to handle unseen or out-of-vocabulary (OOV) word-forms (2) inability to exploit the regularities among word parts. The limitations of word embeddings are particularly pronounced in sentence-level semantic tasks, especially in languages where word parts play a crucial role. Consider the Turkish sentences “Köy+lü-ler (villagers) şehr+e (to town) geldi (came)” and “Sendika+lı-lar (union members) meclis+e (to council) geldi (came)”. Here the stems köy (village) and sendika (union) function similarly in semantic terms with respect to the verb come (as the origin of the agents of the verb), where şehir (town) and meclis (council) both function as the end point. These semantic similarities are determined by the common word parts shown in bold. However ortographic similarity does not always correspond to semantic similarity. For instance the ortographically similar words knight and night have large semantic differences. Therefore, for a successful semantic application, the model should be able to capture both the regularities, i.e, morphological tags and the irregularities, i.e, lemmas of the word.
Morphological analysis already provides the aforementioned information about the words. However, access to useful morphological features may be problematic due to software licensing issues, lack of robust morphological analyzers, and high ambiguity among analyses. Character-level models (CLM), being a cheaper and more accessible alternative to morphology, have been reported as performing competitively on various NLP tasks BIBREF0, BIBREF1, BIBREF2. However, the extent to which these tasks depend on morphology is small, and their relation to semantics is weak. Hence, little is known about their true ability to reveal the underlying morphological structure of a word and about their semantic capabilities. Furthermore, their behaviour across languages from different families, as well as their limitations and strengths, such as handling of long-range dependencies, reaction to model complexity, or performance on out-of-domain data, are unknown. Analyzing such issues is key to fully understanding character-level models.
To achieve this, we perform a case study on semantic role labeling (SRL), a sentence-level semantic analysis task that aims to identify predicate-argument structures and assign meaningful labels to them as follows:
$[$Villagers$]_{\text{comers}}$ came $[$to town$]_{\text{end point}}$
We use a simple method based on bidirectional LSTMs to train three types of base semantic role labelers that employ (1) words, (2) characters and character sequences, and (3) gold morphological analysis. The gold morphology serves as the upper bound for us to compare and analyze the performance of character-level models on languages of varying morphological typologies. We carry out an exhaustive error analysis for each language type and analyze the strengths and limitations of character-level models compared to morphology. In regard to the diversity hypothesis, which states that diversity of systems in ensembles leads to further improvement, we combine character- and morphology-level models and measure the performance of the ensemble to better understand how similar they are.
We experiment with several languages with varying degrees of morphological richness and typology: Turkish, Finnish, Czech, German, Spanish, Catalan and English. Our experiments and analysis reveal insights such as:
To achieve this, we perform a case study on semantic role labeling (SRL), a sentence-level semantic analysis task that aims to identify predicate-argument structures and assign meaningful labels to them as follows: $[$ Villagers $]$ comers came $[$ to town $]$ end point | What is the aim of performing a case study on semantic role labeling (SRL)? | It aims to identify predicate-argument structures and assign meaningful labels to them. |
null | false | null | What is one of your favorite hobbies? | I love playing with my children. I have a 5 year old son and a 2 year old daughter, both of whom are an absolute delight. My son enjoys reading, exploring, and general horseplay, while my daughter enjoys pretending all of her toys are families. It's a lot of fun coming home every day and spending time with those two. | |
null | false | null | A lenticular galaxy (denoted S0) is a type of galaxy intermediate between an elliptical (denoted E) and a spiral galaxy in galaxy morphological classification schemes. It contains a large-scale disc but does not have large-scale spiral arms. Lenticular galaxies are disc galaxies that have used up or lost most of their interstellar matter and therefore have very little ongoing star formation. They may, however, retain significant dust in their disks. As a result, they consist mainly of aging stars (like elliptical galaxies). Despite the morphological differences, lenticular and elliptical galaxies share common properties like spectral features and scaling relations. Both can be considered early-type galaxies that are passively evolving, at least in the local part of the Universe. Connecting the E galaxies with the S0 galaxies are the ES galaxies with intermediate-scale discs. | Please give me a short bulleted list of the characteristics of lenticular galaxies. | - Consist primarily of aging stars
- Are considered early-type galaxies
- Does not have large-scale spiral arms
- Has a large scale disc |
null | false | 267 | In our evaluation of the proposed scheme, each classifier is implemented as a deep learning model having four layers, as illustrated in Figure FIGREF16 , and is described as follows:
The Input (a.k.a. Embedding) Layer. The input layer's size is defined by the number of inputs for that classifier. This number equals the size of the word vector plus the number of additional features. The word vector dimension was set to 30 to be able to encode every word in the vocabulary used.
The hidden layer. The sigmoid activation was selected for the hidden LSTM layer. Based on preliminary experiments, the dimensionality of the output space for this layer was set to 200. This layer is fully connected to both the Input and the subsequent layer.
The dense layer. The output of the LSTM was run through an additional layer to improve learning and obtain a more stable output. The ReLU activation function was used. Its size was set equal to the size of the input layer.
The output layer. This layer has 3 neurons to provide output in the form of probabilities for each of the three classes Neutral, Racism, and Sexism. The softmax activation function was used for this layer.
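A hedged sketch of this four-layer architecture in PyTorch is given below. The vocabulary size and sequence length are placeholder assumptions not given in the text, and note that PyTorch's `nn.LSTM` does not expose an activation argument, so the sigmoid activation described above is only approximated here by the standard LSTM gates:

```python
import torch
import torch.nn as nn

VOCAB_SIZE = 10000  # assumed; the actual vocabulary size is not given

class TweetClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, 30)       # 30-dim word vectors
        self.lstm = nn.LSTM(30, 200, batch_first=True)  # hidden layer, output size 200
        self.dense = nn.Sequential(nn.Linear(200, 30), nn.ReLU())  # sized like the input
        self.out = nn.Linear(30, 3)                     # Neutral / Racism / Sexism

    def forward(self, tokens):
        h, _ = self.lstm(self.embed(tokens))
        return self.out(self.dense(h[:, -1])).softmax(dim=-1)  # class probabilities

# Forward pass on a dummy batch of 2 sequences of length 7.
probs = TweetClassifier()(torch.zeros(2, 7, dtype=torch.long))
```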
In total we experimented with 11 different setups of the proposed scheme, each with a different ensemble of classifiers; see Table TABREF17.
The output layer. This layer has 3 neurons to provide output in the form of probabilities for each of the three classes Neutral, Racism, and Sexism. The softmax activation function was used for this layer. | What function was used for the output layer? | The softmax activation function. |
null | false | null | Count de Grisly was the first to perform what trick in 1799 | Saw woman in half | |
null | false | null | In the 1994–95 season, Jágr won his first Art Ross Trophy after finishing the regular season with the most points in the NHL; he tied Eric Lindros with 70 points but won based on his 32 goals to Lindros' 29. The next year, Jágr set a record for most points, 149, by a European-born player. Both his 62 goals and 87 assists from that season still stand as career-highs. His 1995–96 totals for assists and points stand as the records for right-wingers in those categories. After the 1997–98 season, Penguins captain Ron Francis signed with the Carolina Hurricanes, leaving Jágr the Penguins' captaincy. From 1997–98 to 2000–01, Jágr would win four-straight NHL scoring titles. In 1999, he would win the Hart Memorial Trophy as the NHL's most valuable player, as well as the Lester B. Pearson Award. In 1998, he led the Czech Republic to a gold medal at the 1998 Winter Olympics. | How many points did Jaromír Jágr score in the 1995-1996 season? | In 1995-1996 season, Jaromír Jágr scored 149 points, which set the record for the most points by a European born player. |
null | false | null | What is a kite? | A kite is a toy that can be be purchased in a store or a specialty shop. It is a craft that is lighter an air that is tied to a string that you can fly on wind currents. The kite has wings that react against the air to create lift enabling it to fly. You can fly a kite at a park or an open area, and you will want to make sure there are no power lines nearby to be safe when flying the kite. | |
null | false | null | Tell me whether these people are comedians or athletes: Chris Rock, Kevin Hart, Tom Segura, Joe Rogan, John Rahm, Zach Ertz, Kyler Murray | Chris Rock: Comedian
Kevin Hart: Comedian
Tom Segura: Comedian
Joe Rogan: Comedian
John Rahm: Athlete
Zach Ertz: Athlete
Kyler Murray: Athlete | |
null | false | null | The British Empire was composed of the dominions, colonies, protectorates, mandates, and other territories ruled or administered by the United Kingdom and its predecessor states. | what was the british empire | The British Empire was composed of the dominions, colonies, protectorates, mandates, and other territories ruled or administered by the United Kingdom and its predecessor states. It began with the overseas possessions and trading posts established by England between the late 16th and early 18th centuries. At its height it was the largest empire in history and, for over a century, was the foremost global power. By 1913, the British Empire held sway over 412 million people, 23 per cent of the world population at the time, and by 1920, it covered 35.5 million km2 (13.7 million sq mi), 24 per cent of the Earth's total land area. As a result, its constitutional, legal, linguistic, and cultural legacy is widespread. At the peak of its power, it was described as "the empire on which the sun never sets", as the Sun was always shining on at least one of its territories. |
null | false | 130 | We evaluate two types of baselines to obtain an impression of the difficulty of the task.
We evaluate four baselines to obtain an impression of the difficulty of the task. | How many baselines does the paper evaluate? | Four baselines. |
null | false | null | Who were the "Big Four" of Grunge? | Alice in Chains, Soundgarden, Nirvana, and Pearl Jam. | |
null | false | null | Who is Michael Jordan? | Michael Jordan (not to be confused with Michael B Jordan, the actor), is widely considered the best basketball player of all time. He played college basketball at the University of North Carolina at Chapel Hill and then was drafted by the Chicago Bulls of the NBA. In the 1990s, Jordan’s Bulls teams won six championships, interrupted by his temporary retirement, during which he played professional baseball. After retiring from basketball, Jordan became a co-owner of the Charlotte Bobcats. Early in his basketball career, Jordan signed an unprecedented sponsorship deal with Nike to make his own branded “Air Jordan” high-top shoes, the largest of a series of high profile endorsements that has enabled him to become a billionaire. | |
null | false | null | Write an email to a landlord asking to replace a kitchen sink | Hi Landlord,
Hope this finds you well. We’ve been thinking about upgrading the kitchen sink (https://www.costco.com/.product.100689380.html). If we were to buy this, would you have a guy who could install it? And depending on the cost we could split the difference?
Let me know if that’s something you’re open to!
Best,
Will | |
2002.08899 | false | null | FLOAT SELECTED: Table 2: Results for artificial Wernicke’s and Broca’s aphasia induced in the LLA-LSTM model. Damage to neural modules is done by randomly initializing their weights, causing the loss of all learned information. The inputs that we present are arbitrarily chosen, subject to the constraints listed in the text. Mean precision (Prec.) results on the test sets are also provided to demonstrate corpus-level results. An ellipses represents the repetition of the preceding word at least 1000 times.
FLOAT SELECTED: Table 2: Results for artificial Wernicke’s and Broca’s aphasia induced in the LLA-LSTM model. Damage to neural modules is done by randomly initializing their weights, causing the loss of all learned information. The inputs that we present are arbitrarily chosen, subject to the constraints listed in the text. Mean precision (Prec.) results on the test sets are also provided to demonstrate corpus-level results. An ellipses represents the repetition of the preceding word at least 1000 times. | How do they damage different neural modules? | Damage to neural modules is done by randomly initializing their weights, causing the loss of all learned information. |
null | false | null | How do I obtain help for an adult relative with a serious mental illness? | In order to obtain assistance, it is critical that the individual receive a medical diagnosis from a professional mental health provider. For adults, this is complicated by the fact that every adult individual has the right to refuse medical care. A person suffering from a severe mental illness will often refuse care due to their delusions and paranoia. Frequently, concerned family members must wait until the relative has a serious psychotic episode at which time they can contact emergency services and have the individual evaluated as a threat to themselves or others. At this point, it might be possible for the government to compel the individual to be diagnosed and receive mental health treatment. | |
null | false | null | Why is Football a popular spectator sport? | Football is an exciting game that is fun for people of all ages to enjoy. Many fans of the game follow a team that is associated with a school or town/city. Football is a fast moving game where players run, tackle, pass and kick a ball on a 100yd long grass or turf (fake grass) field. Referees make sure players follow the rules and if not, they stop the game and penalize the team which committed a foul/penalty. College football is very popular as well as professional football. The NFL (National Football League) is the league with the best professional players. |