| paper_id (string, 10 chars, nullable) | yes_no (bool, 2 classes) | paper_index (int64, 0-519, nullable) | evidence (string, 0-37.7k chars) | question (string, 4-11.7k chars) | answer (string, 1-26k chars) |
|---|---|---|---|---|---|
null | false
| null |
Classify the below as food or drinks.
Fish, Meat, Salad, Apple, Lemonade, Coffee, Tea
|
Food - Fish, Meat, Salad, Apple
Drinks - Lemonade, Coffee, Tea
|
|
null | false
| null |
Which of the following are US presidents and which are Hollywood actors: Jimmy Carter, George Washington, Denzel Washington, Daniel Day Lewis, Abraham Lincoln, and Ronald Reagan.
|
The presidents are: George Washington, Jimmy Carter, Abraham Lincoln, and Ronald Reagan. The Hollywood actors are: Daniel Day Lewis, Denzel Washington, and Ronald Reagan, who was an actor prior to becoming president.
|
|
null | false
| null |
From 1984 to 1989 Stryk studied mathematics and computer science at the Technical University of Munich. In 1994 he received his doctorate in mathematics and then habilitated at the university. He was then a postdoctoral researcher at TU Munich. Since 2000 he has been professor of simulation, system optimization and robotics at the department of computer science of the Technische Universität Darmstadt. From April 2011 to March 2013 he was dean of the department. He was visiting professor and lecturer at the University of California, San Diego, and the Universidade Estadual de Campinas, Brazil. Stryk is vice president of RoboCup. Since 2018 he has been building the German Rescue Robotics Centre.
|
What did Stryk study at the Technical University of Munich?
|
Stryk studied mathematics and computer science at the Technical University of Munich.
|
null | false
| null |
Different uses for glass
|
There are many, but here are just a few:
- Bulbs
- Vases
- Windows
- Drinking glasses
- Screens
- Plates
- Candle holders
- Water bottles
- Table tops
- French door glass
- Slippers (if you are a Cinderella fan)
- Ash trays
- Coasters
- Cooking pots
|
|
null | false
| null |
When was the Roman empire founded?
|
The Roman Empire was founded after the Republican period in 27 BC and lasted through 1453 AD. Some sources claim that the Roman Empire ended in 476 AD with the fall of Rome to the barbarian leader Odoacer; however, this is not true. The Roman Empire continued on through its eastern half. The Eastern Roman Empire is commonly referred to as the Byzantine Empire, but "Byzantine" is a modern label for this eastern empire. In fact, all the way through 1453 AD, the people of the Eastern Roman Empire referred to themselves as Romans.
|
|
null | false
| null |
Alongside the facelift model, Subaru released a new model called the Forester Wilderness for the North American market. It is intended to be a more off-road-oriented version of the Forester, and is positioned between the Limited and Touring trim levels in the Forester lineup. Visually, the Forester has more body cladding, and on the inside, copper-colored accents. It features a 0.5-inch (13 mm) lift for a total of 9.2 inches (230 mm) of ground clearance; the Wilderness also has all-terrain tires, a modified final drive ratio, and a stronger roof rack than previous models. Subaru's StarTex water-repellent seats are also standard.
The Wilderness is powered by the same 2.5-liter FB25D naturally-aspirated four cylinder engine powering other North American market Foresters. The Wilderness uses an Enhanced Dual-Function X-Mode version. The X-Mode is upgraded and re-tuned, with settings for snow, dirt, deep snow and mud. Additional wheel-slip is allowed to occur, which provides an advantage in harsh driving conditions.
|
What's the main benefits of a Subaru Forester Wilderness?
|
The Forester Wilderness provides a 0.5-inch (13 mm) lift that increases ground clearance to 9.2 inches (230 mm).
The vehicle comes with all-terrain tires, a modified final drive ratio, a stronger roof rack, and water-repellent seats.
The engine is the same, but the Wilderness includes an Enhanced Dual-Function X-Mode with additional wheel-slip allowance to handle harsh driving conditions in snow, dirt, deep snow, and mud.
|
null | false
| null |
Why is it important to vote?
|
Voting is a fundamental right and responsibility for citizens of any country. It helps citizens participate in democracy and shape key policies and decisions that affect everyone's life.
|
|
null | false
| null |
David Leatherbarrow is Professor of Architecture and Chair of the Graduate Group in Architecture at the University of Pennsylvania School of Design, Philadelphia, where he has taught since 1984. He received his B.Arch. from the University of Kentucky and holds a Ph.D. in Art from the University of Essex. He has also taught in England, at Cambridge University and the University of Westminster (formerly the Polytechnic of Central London).
|
Where did David Leatherbarrow receive his PhD from?
|
University of Essex
|
null | false
| null |
The series features a number of recurring character archetypes. Most famously, every game since Final Fantasy II, including subsequent remakes of the original Final Fantasy, features a character named Cid. Cid's appearance, personality, goals, and role in the game (non-playable ally, party member, villain) vary dramatically. However, two characteristics many versions of Cid have in common are being a scientist or engineer, and being tied in some way to an airship the party eventually acquires. Every Cid has at least one of these two traits.
Biggs and Wedge, inspired by two Star Wars characters of the same name, appear in numerous games as minor characters, sometimes as comic relief. The later games in the series feature several males with effeminate characteristics. Recurring creatures include Chocobos, Moogles, and Cactuars. Chocobos are large, often flightless birds that appear in several installments as a means of long-distance travel for characters. Moogles are white, stout creatures resembling teddy bears with wings and a single antenna. They serve different roles in games including mail delivery, weaponsmiths, party members, and saving the game. Cactuars are anthropomorphic cacti with haniwa-like faces presented in a running or dashing pose. They usually appear as recurring enemy units, and also as summoned allies or friendly non-player characters in certain titles. Chocobo and Moogle appearances are often accompanied by specific musical themes that have been arranged differently for separate games
|
From the paragraph provided, extract the recurring characters from Final Fantasy. Separate them with a comma.
|
Cid, Biggs, Wedge, Chocobos, Moogles, Cactuars
|
null | false
| null |
What is impala?
|
The impala or rooibok is a medium-sized antelope found in eastern and southern Africa.
|
|
null | false
| null |
What is dopamine?
|
A chemical released by the brain to feel good. Relieves pain.
|
|
null | false
| null |
The 2022 Booker Prize was a literary award given for the best English novel of the year. It was announced on 17 October 2022, during a ceremony hosted by Sophie Duker at the Roundhouse in London. The longlist was announced on 26 July 2022. The shortlist was announced on 6 September. Leila Mottley, at 20, was the youngest longlisted writer to date, and Alan Garner, at 87, the oldest. The majority of the 13 titles were from independent publishers. The prize was awarded to Shehan Karunatilaka for his novel, The Seven Moons of Maali Almeida, receiving £50,000. He is the second Sri Lankan to win the prize, after Michael Ondaatje
|
Please write some key points on Booker prize from the given passage.
|
1. The 2022 Booker Prize was a literary honour presented to the year's top English novel.
2. It was revealed on October 17, 2022, at a ceremony Sophie Duker presided over at London's Roundhouse.
3. On July 26, 2022, the longlist was announced.
4. On September 6th, the shortlist was announced.
5. Alan Garner, aged 87, was the oldest longlisted author to date, and Leila Mottley, at 20 years old, was the youngest.
6. Independent publishers produced the majority of the 13 books.
7. Shehan Karunatilaka received the award and £50,000 for his book The Seven Moons of Maali Almeida.
8. After Michael Ondaatje, he is the second Sri Lankan to win the award.
|
null | false
| null |
What are the words of House Greyjoy?
|
"We Do Not Sow"
|
|
null | false
| null |
What’s the best way to cook scrambled eggs?
|
Heat a pan to medium heat and melt some butter. Crack the eggs into the pan and break the yolks so that the whites and yolks are loosely combined. Continue slowly stirring the eggs. Every 30 seconds, remove the pan from the heat, then return it to the heat for 30 seconds. Continue this process, stirring the whole time, until the eggs reach your preferred consistency.
|
|
1907.08501
| false
| null |
For DQA four participants answered each question, therefore we took the average INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 values over the four evaluators as the result per question. The detailed answers by the participants are available online.
For DQA four participants answered each question, therefore we took the average INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 values over the four evaluators as the result per question.
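As a minimal sketch (not the authors' code), the per-question averaging over the four evaluators can be expressed as follows; the metric names are placeholders for the unresolved INLINEFORM values:

```python
# Average each question's three evaluation metrics over the four DQA evaluators.
from statistics import mean

# ratings[question_id] -> one (metric0, metric1, metric2) tuple per evaluator
ratings = {
    "q1": [(0.8, 0.7, 0.9), (0.6, 0.8, 0.7), (0.7, 0.9, 0.8), (0.9, 0.6, 0.7)],
}

per_question = {
    qid: tuple(mean(vals) for vals in zip(*scores))
    for qid, scores in ratings.items()
}
print(per_question)  # {'q1': (0.75, 0.75, 0.775)}
```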
|
How do they measure performance?
|
The answers are shown as follows:
* average INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 values
|
null | false
| null |
Ways to use Pencil other than writing
|
Use a pencil to play Truth or Dare
Use pencils as darts over a Styrofoam target
Use pencils as chopsticks
Use a pencil as a pointer
Use a pencil to plug small holes
|
|
null | false
| null |
Who are some famous Greek philosophers?
|
Plato, Aristotle and Socrates are all famous Greek philosophers
|
|
null | false
| null |
Which characters belong to DC or Marvel Universe? Daredevil, Robin
|
Robin is DC, Daredevil is Marvel
|
|
null | false
| null |
Sushi (すし, 寿司, 鮨, 鮓, pronounced [sɯɕiꜜ] or [sɯꜜɕi]) is a Japanese dish of prepared vinegared rice (鮨飯, sushi-meshi), usually with some sugar and salt, accompanied by a variety of ingredients (ねた, neta), such as seafood, often raw, and vegetables. Styles of sushi and its presentation vary widely, but the one key ingredient is "sushi rice", also referred to as shari (しゃり), or sumeshi (酢飯).
The inventor of modern sushi is believed to be Hanaya Yohei, who invented nigiri-zushi, a type of sushi most known today, in which seafood is placed on hand-pressed vinegared rice, around 1824 in the Edo period (1603–1867). It was the fast food of the chōnin class in the Edo period.
Sushi is traditionally made with medium-grain white rice, though it can be prepared with brown rice or short-grain rice. It is very often prepared with seafood, such as squid, eel, yellowtail, salmon, tuna or imitation crab meat. Many types of sushi are vegetarian. It is often served with pickled ginger (gari), wasabi, and soy sauce. Daikon radish or pickled daikon (takuan) are popular garnishes for the dish.
Sushi is sometimes confused with sashimi, a similar dish in Japanese cuisine that consists of thinly sliced raw fish or occasionally meat.
|
What is a Japanese dish of prepared rice, usually with some sugar and salt, accompanied by a variety of ingredients, such as seafood, often raw, and vegetables?
|
Sushi is the answer to the question.
|
null | false
| null |
What type of car should I buy if I live in a city?
|
When living in the city, it is important to drive a vehicle that is compact enough to fit in small parking spaces and one that has sufficient fuel efficiency to drive for longer periods of time without requiring additional fuel. This is due to the scarcity of gas stations within city limits.
Small to mid-sized sedans or other compact cars with either good fuel efficiency or electric batteries are best for city driving. Examples include Toyota Prius, Volkswagen Golf, or Tesla Model 3.
It is imperative to avoid large vehicles when driving in the city. Sample vehicles to avoid include the Ford F-150 pickup truck, the Chevrolet Suburban, and the Hummer.
|
|
null | false
| 6
|
Abstractive Summarization. Abstractive summarization aims to generate coherent summaries with high readability, and has seen increasing interest and improved performance due to the emergence of seq2seq models BIBREF8 and attention mechanisms BIBREF9. For example, in addition to using an encoder-decoder model with attention, BIBREF0, BIBREF2, and BIBREF1 used pointer networks to solve the out-of-vocabulary issue, while BIBREF0 used a coverage mechanism to solve the problem of word repetition. In addition, BIBREF2 and BIBREF10 used reinforcement learning in an end-to-end setting.
To our knowledge, training such neural abstractive summarization models in low resource domains using domain transfer has not been thoroughly explored on domains different than news. For example, BIBREF4 reported the results of training on CNN/DM data while evaluating on DUC data without any tuning. Note that these two datasets are both in the news domain, and both consist of well written, structured documents. The domain transfer experiments of BIBREF1 similarly used two different news summarization datasets (CNN/DM and NYT). Our work differs in several ways from these two prior domain transfer efforts. First, our experiments involve two entirely different domains: news and student reflections. Unlike news, student reflection documents lack global structure, are repetitive, and contain many sentence fragments and grammatical mistakes. Second, the prior approaches either trained a part of the model using NYT data while retaining the other part of the model trained only on CNN/DM data BIBREF1, or didn't perform any tuning at all BIBREF4. In contrast, we do the training in two consecutive phases, pretraining and fine tuning. Finally, BIBREF1 reported that while training with domain transfer outperformed training only on out-of-domain data, it was not able to beat training only on in-domain data. This is likely because their in and out-of-domain data sizes are comparable, unlike in our case of scarce in-domain data.
In a different approach to abstractive summarization, BIBREF11 developed a soft template based neural method consisting of an end-to-end deep model for template retrieval, reranking and summary rewriting. While we also develop a template based model, our work differs in both model structure and purpose.
Data Synthesis. Data synthesis for text summarization is underexplored, with most prior work focusing on machine translation and text normalization. BIBREF12 proposed doing data augmentation through word replacement, using WordNet BIBREF13 and vector space similarity. We will use a WordNet replacement method as a baseline synthesis method in the experiments described below. In contrast, BIBREF14 synthesized/augmented data through back-translation and word replacement using language models. BIBREF15 is another recent work that was done in parallel and is very close to ours. However, in addition to the differences between our model and theirs, we think it might be infeasible to back-generate student reflections from a human summary, especially an abstractive one.
First, our experiments involve two entirely different domains: news and student reflections.
|
What domains do the authors' experiments involve?
|
News and student reflections.
|
null | false
| null |
What is mutually assured destruction?
|
Mutually assured destruction is the belief that two countries will not attack each other if each country has the ability to destroy the other with the same amount of military power. Mutually assured destruction or "MAD" is a principle from the era of the Cold War and is considered part of a US military doctrine. The policy is often applied to the use of nuclear weapons. For example, because both the United States and Russia have nuclear weapons that could annihilate large populations, neither uses them. To do so, would mean the end of both populations, and a large part of humanity.
|
|
1603.01417
| false
| null |
We analyze the DMN components, specifically the input module and memory module, to improve question answering. We propose a new input module which uses a two level encoder with a sentence reader and input fusion layer to allow for information flow between sentences. For the memory, we propose a modification to gated recurrent units (GRU) BIBREF7 . The new GRU formulation incorporates attention gates that are computed using global knowledge over the facts. Unlike before, the new DMN+ model does not require that supporting facts (i.e. the facts that are relevant for answering a particular question) are labeled during training. The model learns to select the important facts from a larger set.
the facts that are relevant for answering a particular question are labeled during training.
|
What does supporting fact supervision mean?
|
The answers are shown as follows:
* the facts that are relevant for answering a particular question are labeled during training.
|
null | false
| 7
|
This paper provided a taxonomy of stereotype-driven descriptions in the Flickr30K dataset. I have divided these descriptions into two classes: linguistic bias and unwarranted inferences. The former corresponds to the annotators' choice of words when confronted with an image that may or may not match their stereotypical expectancies. The latter corresponds to the tendency of annotators to go beyond what the physical data can tell us, and expand their descriptions based on their past experiences and knowledge of the world. Acknowledging these phenomena is important, because on the one hand it helps us think about what is learnable from the data, and on the other hand it serves as a warning: if we train and evaluate language models on this data, we are effectively teaching them to be biased.
I have also looked at methods to detect stereotype-driven descriptions, but due to the richness of language it is difficult to find an automated measure. Depending on whether your goal is production or interpretation, it may either be useful to suppress or to emphasize biases in human language. Finally, I have discussed stereotyping behavior as the addition of a contextual layer on top of a more basic description. This raises the question what kind of descriptions we would like our models to produce.
Depending on whether your goal is production or interpretation, it may either be useful to suppress or to emphasize biases in human language.
|
What does the author conclude?
|
Depending on whether your goal is production or interpretation, it may either be useful to suppress or to emphasize biases in human language.
|
null | false
| null |
Is California a great place to live?
|
Many people are drawn to California for the diverse landscapes, sunshine, career opportunities and beautiful landmarks, however there are considerations when choosing the Golden State.
California provides access to many beautiful and diverse topographical locations. The oceans that stretch from Northern California through to Mexico provide a rich wildlife habitat and amazing surf spots. The Sierras, the mountain range a short drive away, allow for skiing and snowboarding for a majority of the year at locations that have been home to the Winter Olympics. The Central Valley is the home of farmlands and ranches and seems to stretch for miles. California provides natural wonders from forests to deserts to mountain peaks.
California's climate in most parts is mild throughout the year. It is joked about by many, that it has one season. However, with the onset of global warming the weather patterns have seen a change. The extreme droughts have been followed by extreme rain and flooding. Although there has been more extreme weather across the globe, California still has a predominantly mild climate for a majority of the year making it a top destination for those looking for sunshine.
California has been known as the Golden State. It has provided golden opportunities for many who have come from all over the world to make their dreams come true. During the Gold Rush in the 1800s, miners came in from all over to find gold nuggets in the Sacramento Valley. Hollywood, the home of movies and television for the world, has been the mecca for many young people looking to find success in acting and celebrity since the early 1900s. Silicon Valley has birthed some of the most successful companies in the world, showcasing the talents of students and entrepreneurs, changing the trajectory and pace of innovation. California is home to many of the most transformative companies, providing a wealth of job opportunities. California is the showcase of the American dream.
One can spend years visiting landmarks and interesting locations in California. The Golden Gate Bridge is a must-see in San Francisco; it is one of the most historic bridges and the doorway to the San Francisco Bay. Yosemite is a glorious national park and home to gorgeous waterfalls, hiking and Half Dome. Along the coast, visiting Big Sur is a must, showcasing the sky-high redwoods and a priceless view of the coastline. Palm Springs provides a fun desert location that is home to music festivals, incredible golfing and natural hot springs. Another point of interest is Lake Tahoe, a gorgeous lake that is truly breathtaking. It is surrounded by mountains of snow in the winter and is home to water skiers in the summer. This is just a taste of the amazing locations California has to offer.
California truly has a treasure trove of benefits for visitors and residents; however, it has been struggling with issues that can threaten its future and make it difficult for those that choose to call it home. California has very high taxes for both residents and business owners. The population boom has also created a lack of low-cost housing; California is home to some of the highest housing prices in the United States. The high taxes and high-priced housing make this state out of reach for many. Additionally, the state needs to work on investments in public education, infrastructure and homelessness, which have all been suffering in recent years.
Despite some of its struggles, the Golden State continues to be golden due to amazing locations, an incredible climate, a diverse population and unmatched opportunity. California is a unique gem on the west coast for the many who choose to live there and visit.
|
|
1910.04887
| false
| null |
To adapt the FactorCell BIBREF4 for our purposes, we replace user embeddings with a low-dimensional image representation. Thus, we are able to modify each query completion to be personalized to a specific image representation. We extract features from an input image using a CNN pretrained on ImageNet, retraining only the last two fully connected layers. The image feature vector is fed into the FactorCell through the adaptation matrix. We perform beam search over the sequence of predicted characters to chose the optimal completion for the given prefix.
To adapt the FactorCell BIBREF4 for our purposes, we replace user embeddings with a low-dimensional image representation. Thus, we are able to modify each query completion to be personalized to a specific image representation.
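A minimal sketch of the image-feature extraction step, assuming a torchvision VGG16 backbone with standard ImageNet weights (the specific CNN, the layer indices, and the feature dimensionality are assumptions, not stated in the excerpt):

```python
# Extract an image feature vector with an ImageNet-pretrained CNN, keeping
# only the last two fully connected layers trainable, as described above.
import torch
from torchvision import models

cnn = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in cnn.parameters():
    p.requires_grad = False
for layer in (cnn.classifier[3], cnn.classifier[6]):   # last two FC layers
    for p in layer.parameters():
        p.requires_grad = True

image = torch.randn(1, 3, 224, 224)   # placeholder for a preprocessed image
features = cnn(image)                 # (1, 1000); in practice this would be
print(features.shape)                 # projected to a low-dimensional vector
                                      # and fed to the FactorCell adaptation matrix
```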
|
How they complete a user query prefix conditioned upon an image?
|
The answers are shown as follows:
* we replace user embeddings with a low-dimensional image representation
|
null | false
| 214
|
Our datasets can be used for various hate speech tasks. In this paper, we focus on generative hate speech intervention.
The goal of this task is to generate a response to hate speech that can mitigate its use during a conversation. The objective can be formulated as the following equation:
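A plausible form, assuming a standard maximum-likelihood objective with model parameters $\theta$ (both $\theta$ and the exact form are assumptions, since the display equation is not reproduced in this excerpt):

$$\max_{\theta} \; \sum_{(c,\, r) \in D} \log p_{\theta}(r \mid c)$$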
where $c$ is the conversation, $r$ is the corresponding intervention response, and $D$ is the dataset. This task is closely related to response generation and dialog generation, though several differences exist, including dialog length, language cadence, and word imbalances. As a baseline, we chose the most common methods for these two tasks, such as Seq2Seq and VAE, to determine the initial feasibility of automatically generating intervention responses. More recent reinforcement learning methods for dialog generation BIBREF21 can also be applied to this task with slight modification. Future work will explore more complex and unique models.
Similar to BIBREF21, a generative model is considered as an agent. However, different from dialog generation, generative intervention does not have multiple turns of utterance, so the action of the agent is to select a token in the response. The state of the agent is given by the input posts and the previously generated tokens. Another result due to this difference is that the rewards with regard to ease of answering or information flow do not apply to this case, but the reward for semantic coherence does. Therefore, the reward of the agent is:
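A plausible form, assuming the length-normalized formulation of BIBREF21 with an interpolation weight $\lambda$ (the normalization and $\lambda$ are assumptions):

$$rw(c, r) = \lambda \, \frac{1}{|r|} \log p(r \mid c) \; + \; (1 - \lambda) \, \frac{1}{|c|} \log p_{back}(c \mid r)$$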
where $rw(c,r)$ is the reward with regard to the conversation $c$ and its reference response $r$ in the dataset. $p(r|c)$ denotes the probability of generating response $r$ given the conversation $c$, and $p_{back}(c|r)$ denotes the backward probability of generating the conversation based on the response, which is parameterized by another generation network. The reward is a weighted combination of these two parts, which are observed after the agent finishing generating the response. We refer the readers to BIBREF21 for details.
The decoder consists of 2 GRU layers followed by a 3-layer MLP (Multi-Layer Perceptron).
|
What does the decoder consist of?
|
The decoder consists of 2 GRU layers followed by a 3-layer MLP (Multi-Layer Perceptron).
|
null | false
| null |
Describe to me in clear detail what the amazon logo looks like.
|
The Amazon logo is a curved, yellow arrow pointing to the right. The arrow head is separated from the stem. Sometimes the brand will be colored black instead but it is a well recognized logo regardless. It is a simple design and used throughout all products that Amazon owns including Amazon Prime, Amazon.com, Amazon Go, and more.
|
|
1910.11949
| false
| null |
We use MS COCO, Bing and Flickr datasets from BIBREF26 to train the model that generates questions. These datasets contain natural questions about images with the purpose of knowing more about the picture. As can be seen in Figure FIGREF8, questions cannot be answered by only looking at the image. Each source contains 5,000 images with 5 questions per image, adding a total of 15,000 images with 75,000 questions. The COCO dataset includes images of complex everyday scenes containing common objects in their natural context, but it is limited in terms of the concepts it covers. The Bing dataset contains more event-related questions and has a wider range of question lengths (between 3 and 20 words), while Flickr questions are shorter (less than 6 words) and the images appear to be more casual.
We use two datasets to train our chatbot model. The first one is the Persona-chat BIBREF15 which contains dialogues between two people with different profiles that are trying to know each other. It is complemented by the Cornell-movie dialogues dataset BIBREF27, which contains a collection of fictional conversations extracted from raw movie scripts. Persona-chat's sentences have a maximum of 15 words, making it easier to learn for machines and a total of 162,064 utterances over 10,907 dialogues. While Cornell-movie dataset contains 304,713 utterances over 220,579 conversational exchanges between 10,292 pairs of movie characters.
We use MS COCO, Bing and Flickr datasets from BIBREF26 to train the model that generates questions. These datasets contain natural questions about images with the purpose of knowing more about the picture. As can be seen in the Figure FIGREF8, questions cannot be answered by only looking at the image. Each source contains 5,000 images with 5 questions per image, adding a total of 15,000 images with 75,000 questions.
We use two datasets to train our chatbot model. The first one is the Persona-chat BIBREF15 which contains dialogues between two people with different profiles that are trying to know each other. It is complemented by the Cornell-movie dialogues dataset BIBREF27, which contains a collection of fictional conversations extracted from raw movie scripts. Persona-chat's sentences have a maximum of 15 words, making it easier to learn for machines and a total of 162,064 utterances over 10,907 dialogues. While Cornell-movie dataset contains 304,713 utterances over 220,579 conversational exchanges between 10,292 pairs of movie characters.
|
How big dataset is used for training this system?
|
For the question generation model, 15,000 images with 75,000 questions. For the chatbot model, around 467k utterances (162,064 + 304,713) over roughly 231k dialogues and conversational exchanges (10,907 + 220,579).
|
null | false
| null |
The Marinid Sultanate was a Berber Muslim empire from the mid-13th to the 15th century which controlled present-day Morocco and, intermittently, other parts of North Africa (Algeria and Tunisia) and of the southern Iberian Peninsula (Spain) around Gibraltar. It was named after the Banu Marin (Arabic: بنو مرين, Berber: Ayt Mrin), a Zenata Berber tribe. The sultanate was ruled by the Marinid dynasty (Arabic: المرينيون al-marīniyyūn), founded by Abd al-Haqq I.
|
Who are el Marinid ?
|
The Marinid Sultanate was a Berber Muslim empire founded in the mid-13th century that ruled until the 15th century. It controlled Morocco, Algeria, Tunisia, and Al-Andalus around Gibraltar.
It was named after the Banu Marin, a Zenata Berber tribe.
|
null | false
| null |
How many cents do I have if I have two nickels?
|
If you have two nickels, you have 10 cents.
|
|
null | false
| null |
What is a Wonton?
|
A wonton is a type of Chinese dumpling that is generally served in a soup broth. The dumpling filling is placed onto a thin dough wrapper, and the wrapper is then sealed to prevent the filling from spilling out during cooking. The filling is commonly made with pork and shrimp. Filled wonton dumplings are then placed into boiling water for cooking. Wontons can also be deep fried with a variety of fillings.
|
|
null | false
| null |
What is Scotland's largest lake?
|
Loch Ness is Scotland's largest lake (loch) by volume, holding about 7.45 km³ of water.
|
|
null | false
| null |
What is the Von Neumann architecture?
|
Nearly all modern computers are based on this famous architecture consisting of a central processing unit, a memory for storing instructions and data, and a bus to transmit data between components including input and output devices.
|
|
null | false
| null |
Cesar Chavez (born Cesario Estrada Chavez /ˈtʃɑːvɛz/; Spanish: [ˈt͡ʃaβes]; March 31, 1927 – April 23, 1993) was an American labor leader and civil rights activist. Along with Dolores Huerta, he co-founded the National Farm Workers Association (NFWA), which later merged with the Agricultural Workers Organizing Committee (AWOC) to become the United Farm Workers (UFW) labor union. Ideologically, his world-view combined leftist politics with Catholic social teachings.
Born in Yuma, Arizona to a Mexican American family, Chavez began his working life as a manual laborer before spending two years in the United States Navy. Relocating to California, where he married, he got involved in the Community Service Organization (CSO), through which he helped laborers register to vote. In 1959, he became the CSO's national director, a position based in Los Angeles. In 1962, he left the CSO to co-found the NFWA, based in Delano, California, through which he launched an insurance scheme, a credit union, and the El Malcriado newspaper for farmworkers. Later that decade he began organizing strikes among farmworkers, most notably the successful Delano grape strike of 1965–1970. Amid the grape strike his NFWA merged with Larry Itliong's AWOC to form the UFW in 1967. Influenced by the Indian independence leader Mahatma Gandhi, Chavez emphasized direct but nonviolent tactics, including pickets and boycotts, to pressure farm owners into granting strikers' demands. He imbued his campaigns with Roman Catholic symbolism, including public processions, masses, and fasts. He received much support from labor and leftist groups but was monitored by the Federal Bureau of Investigation (FBI).
In the early 1970s, Chavez sought to expand the UFW's influence outside California by opening branches in other U.S. states. Viewing illegal immigrants as a major source of strike-breakers, he also pushed a campaign against illegal immigration into the U.S., which generated violence along the U.S.-Mexico border and caused schisms with many of the UFW's allies. Interested in co-operatives as a form of organization, he established a remote commune at Keene. His increased isolation and emphasis on unrelenting campaigning alienated many California farmworkers who had previously supported him and by 1973 the UFW had lost most of the contracts and membership it won during the late 1960s. His alliance with California Governor Jerry Brown helped ensure the passing of the California Agricultural Labor Relations Act of 1975, although the UFW's campaign to get its measures enshrined in California's constitution failed. Influenced by the Synanon religious organization, Chavez re-emphasized communal living and purged perceived opponents. Membership of the UFW dwindled in the 1980s, with Chavez refocusing on anti-pesticide campaigns and moving into real-estate development, generating controversy for his use of non-unionized laborers.
A controversial figure, UFW critics raised concerns about Chavez's autocratic control of the union, the purges of those he deemed disloyal, and the personality cult built around him, while farm-owners considered him a communist subversive. He became an icon for organized labor and leftist groups in the U.S. and posthumously became a "folk saint" among Mexican Americans. His birthday is a federal commemorative holiday in several U.S. states, while many places are named after him, and in 1994 he posthumously received the Presidential Medal of Freedom.
|
Without quoting directly from the text give me a summary of Cesar Chavez greatest achievements.
|
Cesar Chavez is widely regarded as a folk hero who fought for the rights of farm workers in the 1960s and 1970s. He co-founded the United Farm Workers union and helped to pass laws in California protecting farm workers. He worked on expanding the movement throughout the United States but had limited success.
|
null | false
| null |
Where was the first capital of Alaska?
|
Alaska's first capital was Sitka, following the transfer of the Alaska territory from Russia to the United States in 1867; it remained the capital until 1906, when the capital moved to Juneau.
|
|
null | false
| 193
|
Natural language inference (NLI) is a widely-studied problem in natural language processing. It aims at comparing a pair of sentences (i.e. a premise and a hypothesis), and inferring the relationship between them (i.e., entailment, neutral and contradiction). Large-scaled datasets like SNLI BIBREF0 and MultiNLI BIBREF1 have been created by crowd-sourcing and fertilized NLI research substantially.
However, several works BIBREF2, BIBREF3, BIBREF4 have pointed out that crowd-sourcing workers have introduced a bias pattern, known as annotation artifacts, into these NLI datasets. Such artifacts in hypotheses can reveal the labels and make it possible to predict the labels solely by looking at the hypotheses. For example, models trained on SNLI with only the hypotheses can achieve an accuracy of 67.0%, even though the always-predict-the-majority-class baseline achieves only 34.3% BIBREF2.
Classifiers trained on NLI datasets are supposed to make predictions by understanding the semantic relationships between given sentence pairs. However, it is shown that models are unintentionally utilizing the annotation artifacts BIBREF4, BIBREF2. If the evaluation is conducted under a similar distribution as the training data, e.g., with the given testing set, models will enjoy additional advantages, making the evaluation results over-estimated. On the other hand, if the bias pattern cannot be generalized to the real-world, it may introduce noise to models, thus hurting the generalization ability.
In this paper, we use cross-dataset testing to better assess models' generalization ability. We investigate the impacts of annotation artifacts in cross-dataset testing. Furthermore, we propose an easy-adopting debiasing training framework, which doesn't require any additional data or annotations, and apply it to the high-performing Densely Interactive Inference Network BIBREF5. Experiments show that our method can effectively mitigate the bias pattern and improve the cross-dataset generalization ability of models. To the best of our knowledge, our work is the first attempt to alleviate the annotation artifacts without any extra resources.
Furthermore, we propose an easy-adopting debiasing training framework, which doesn't require any additional data or annotations, and apply it to the high-performing Densely Interactive Inference Network.
|
What is the easy-adopting debiasing training framework?
|
It is a debiasing training framework that doesn't require any additional data or annotations; the authors apply it to the high-performing Densely Interactive Inference Network.
|
null | false
| null |
The Grateful Dead was founded in the San Francisco Bay Area amid the rise of the counterculture of the 1960s. The founding members were Jerry Garcia (lead guitar, vocals), Bob Weir (rhythm guitar, vocals), Ron "Pigpen" McKernan (keyboards, harmonica, vocals), Phil Lesh (bass, vocals), and Bill Kreutzmann (drums). Members of the Grateful Dead, originally known as the Warlocks, had played together in various Bay Area ensembles, including the traditional jug band Mother McCree's Uptown Jug Champions. Lesh was the last member to join the Warlocks before they changed their name to the Grateful Dead; replacing Dana Morgan Jr., who had played bass for a few gigs. Drummer Mickey Hart and non-performing lyricist Robert Hunter joined in 1967. With the exception of McKernan, who died in 1973, and Hart, who took time off from 1971 to 1974, the core of the band stayed together for its entire 30-year history. The other official members of the band are Tom Constanten (keyboards; 1968–1970), John Perry Barlow (non-performing lyricist; 1971–1995), Keith Godchaux (keyboards, occasional vocals; 1971–1979), Donna Godchaux (vocals; 1972–1979), Brent Mydland (keyboards, vocals; 1979–1990), and Vince Welnick (keyboards, vocals; 1990–1995). Bruce Hornsby (accordion, piano, vocals) was a touring member from 1990 to 1992, as well as a guest with the band on occasion before and after the tours.
|
Given the paragraph about the Grateful Dead, tell me when they were founded, who were the original members, and what other members were in the group.
|
The Grateful Dead were founded in the 1960's as the Warlocks by Jerry Garcia, Bob Weir, Ron "Pigpen" McKernan, Phil Lesh and Bill Kreutzmann. Over the years, the band had many other members, including Tom Constanten, John Perry Barlow, Keith Godchaux, Donna Godchaux, Brent Mydland, Vince Welnick and Bruce Hornsby.
|
null | false
| null |
Hermenegildo Sosa (born April 12, 1946) is a Mexican painter and art professor who is best known for colorful depictions of Mexican rural landscapes, especially those of his home state of Tlaxcala. He was born there into a farming family, whose economic condition delayed his education as he had to work from a young age. In his teens he arrived to Mexico City to work as a domestic, but this allowed him to attend school, including painting classes. Eventually, he entered the Escuela Nacional de Pintura, Escultura y Grabado "La Esmeralda", where he eventually gained a career as a professor.
|
Who is Hermenegildo Sosa and what is he known for?
|
Hermenegildo Sosa is a Mexican painter and art professor born in 1946. He is best known for his colorful depictions of Mexican rural landscapes, especially those of his home state of Tlaxcala.
|
1905.06906
| false
| null |
Amazon Reviews Dataset BIBREF24 is a large dataset with millions of reviews from different product categories. For our experiments, we consider a subset of 20000 reviews from the domains Cell Phones and Accessories(C), Clothing and Shoes(S), Home and Kitchen(H) and Tools and Home Improvement(T). Out of 20000 reviews, 10000 are positive and 10000 are negative. We use 12800 reviews for training, 3200 reviews for validation and 4000 reviews for testing from each domain.
Amazon Reviews Dataset BIBREF24 is a large dataset with millions of reviews from different product categories. For our experiments, we consider a subset of 20000 reviews from the domains Cell Phones and Accessories(C), Clothing and Shoes(S), Home and Kitchen(H) and Tools and Home Improvement(T). Out of 20000 reviews, 10000 are positive and 10000 are negative. We use 12800 reviews for training, 3200 reviews for validation and 4000 reviews for testing from each domain.
|
For the purposes of this paper, how is something determined to be domain specific knowledge?
|
Reviews under distinct product categories are considered domain-specific knowledge.
|
null | false
| null |
What makes the banded bullfrog special
|
The banded bullfrog (Kaloula pulchra) is a species of frog in the narrow-mouthed frog family Microhylidae. Native to Southeast Asia, it is also known as the Asian painted frog, digging frog, Malaysian bullfrog, common Asian frog, and painted balloon frog. In the pet trade, it is sometimes called the chubby frog. Adults measure 5.4 to 7.5 cm (2.1 to 3.0 in) and have a dark brown back with stripes that vary from copper-brown to salmon pink.
The banded bullfrog lives at low altitudes and is found in both urban and rural settings, as well as in forest habitats. They bury themselves underground during dry periods and emerge after heavy rainfall to emit calls and breed. They feed primarily on ants and termites; predators of adults and tadpoles include snakes, dragonfly larvae, and snails. When threatened, they inflate their lungs and secrete a noxious white substance. The species is prevalent in the pet trade and is a potential invasive species being introduced in Taiwan, the Philippines, Guam, Singapore, Borneo, and Sulawesi.
|
|
null | false
| null |
What are the best months to visit Alaska?
|
Spring through summer are the best months to visit Alaska, owing to warmer temperatures that fluctuate between 60°F and 67°F and long days that make it ideal for tourists to explore the state.
|
|
null | false
| null |
Rómulo Ernesto Betancourt Bello (22 February 1908 – 28 September 1981; Spanish pronunciation: [ˈromulo betaŋˈkuɾ]), known as "The Father of Venezuelan Democracy", was the president of Venezuela, serving from 1945 to 1948 and again from 1959 to 1964, as well as leader of Acción Democrática, Venezuela's dominant political party in the 20th century.
|
Who is Romulo Betancourt?
|
Rómulo Ernesto Betancourt Bello (22 February 1908 – 28 September 1981; Spanish pronunciation: [ˈromulo betaŋˈkuɾ]), known as "The Father of Venezuelan Democracy", was the president of Venezuela, serving from 1945 to 1948 and again from 1959 to 1964, as well as leader of Acción Democrática, Venezuela's dominant political party in the 20th century.
Betancourt, one of Venezuela's most important political figures, led a tumultuous career in Latin American politics. Periods of exile brought Betancourt in contact with various Latin American countries as well as the United States, securing his legacy as one of the most prominent international leaders to emerge from 20th-century Latin America. Scholars credit Betancourt as the Founding Father of modern democratic Venezuela.
|
null | false
| null |
Identify which instrument is string or percussion: Hi-hat, Vihuela
|
Vihuela is string, Hi-hat is percussion.
|
|
null | false
| null |
Choose the word which is different from the rest: curd, butter, oil, cheese, cream
|
Oil is the odd one out; all the others (curd, butter, cheese, cream) are products obtained from milk.
|
|
null | false
| null |
Classify each of the following as either a title by Margaret Atwood or Mary Shelley: The Handmaid's Tale, The Testaments, The Edible Woman, Oryx and Crake, The Year of the Flood, Frankenstein, The Last Man, Lodore, The Mortal Immortal, Maurice
|
Margaret Atwood: The Handmaid's Tale, The Testaments, The Edible Woman, Oryx and Crake, The Year of the Flood
Mary Shelley: Frankenstein, The Last Man, Lodore, The Mortal Immortal, Maurice
|
|
null | false
| null |
What are the Chinese dialects that people speak in China?
|
While most people's native language in China is Mandarin, there are nine major groups of dialects: Wu, Gan, Xiang, Min, Hakka, Yue, Jin, Huizhou, and Pinghua. These dialects share the same Chinese characters as Mandarin, which means the formal writing is the same, but each has its own pronunciation, making it hard for speakers of different dialects to understand one another.
|
|
null | false
| null |
Give me a list of popular DTC brands
|
- Allbirds
- Away
- Warby Parker
- Stitch Fix
- Harry's
- Peloton
- Casper
- Birchbox
- Shein
- Chubbies
|
|
1811.02906
| false
| null |
The baseline classifier uses a linear Support Vector Machine BIBREF7 , which is suited for a high number of features. We use a text classification framework for German BIBREF8 that has been used successfully for sentiment analysis before.
The baseline classifier uses a linear Support Vector Machine BIBREF7 , which is suited for a high number of features.
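As an illustration only, a comparable linear-SVM text classifier can be sketched with scikit-learn; the paper itself uses a German text classification framework BIBREF8, so the library choice and TF-IDF features here are assumptions:

```python
# Illustrative linear-SVM baseline (scikit-learn stands in for the framework
# used in the paper); TF-IDF n-grams give the high-dimensional sparse features
# that a linear SVM handles well.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

train_texts = ["das ist großartig", "das ist schrecklich"]   # toy data
train_labels = ["positive", "negative"]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LinearSVC(),
)
clf.fit(train_texts, train_labels)
print(clf.predict(["das ist großartig"]))  # -> ['positive']
```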
|
What baseline is used?
|
SVM
|
null | false
| null |
In 2013, Rolex created the first two-tone ceramic bezel, in blue and black, for the steel GMT-Master II nicknamed “Batman”. The blue and black bezel is made of Cerachrom, Rolex's patented version of ceramic; it is produced through a patented process of creating two colors out of a mono-block Cerachrom bezel.
|
When did Rolex introduce the Batman GMT
|
The Batman was introduced in 2013 with a black and blue bezel
|
null | false
| 171
|
In order to assess the perceived appropriateness of system responses, we conduct a human study using crowd-sourcing on the FigureEight platform. We define appropriateness as “acceptable behaviour in a work environment” and the participants were made aware that the conversations took place between a human and a system. Ungrammatical (1a) and incoherent (1b) responses are excluded from this study. We collect appropriateness ratings given a stimulus (the prompt) and four randomly sampled responses from our corpus that the worker is to label following the methodology described in BIBREF21, where each utterance is rated relative to a reference on a user-defined scale. Ratings are then normalised on a scale from [0-1]. This methodology was shown to produce more reliable user ratings than commonly used Likert Scales. In addition, we collect demographic information, including gender and age group. In total we collected 9960 HITs from 472 crowd workers. In order to identify spammers and unsuitable ratings, we use the responses from the adult-only bots as test questions: We remove users who give high ratings to sexual bot responses the majority (more than 55%) of the time. 18,826 scores remain, resulting in an average of 7.7 ratings per individual system reply and 1568.8 ratings per response type as listed in Table TABREF14. Due to missing demographic data, and after removing malicious crowdworkers, we only consider a subset of 190 raters for our demographic study. The group is composed of 130 men and 60 women. Most raters (62.6%) are under the age of 44, with similar proportions across age groups for men and women. This is in line with our target population: 57% of users of smart speakers are male and the majority are under 44 BIBREF22.
In addition, we collect demographic information, including gender and age group.
|
What does the demographic information include?
|
Gender and age group.
|
null | false
| null |
What is the Phillips curve?
|
In economics, the Phillips curve is the short-run tradeoff between the unemployment rate and inflation. The relationship implies a negative correlation such that an increase in the unemployment rate is associated with a decrease in wage rate. Conversely, during times in which the unemployment rate is quite low (strong demand for labor) the growth of wages is typically observed to be quite high, or increasing. While this phenomenon has been observed in the short-run there are some periods in which the relationship does not hold.
|
|
null | false
| null |
What is a Put option in finance?
|
A put option is the right, but not the obligation, to sell an underlying asset, for example a foreign currency, at a particular price or exchange rate.
|
|
null | false
| 310
|
During training, we find that the model can easily overfit the training data, which yields poor performance on the test and validation sets. To make the model more stable on the unseen data, we apply dropout to the word embeddings, POS embeddings, vectors after the convolutional layers and the stacked recurrent layers. In Figure FIGREF24, we examine the accuracies for different dropout rates. We find that adding dropout alleviates overfitting issues on the training set. If we reduce the dropout rate to 0.1, which means randomly setting some values to zero with probability 0.1, the training F1-Multi increases rapidly and the validation F1-Multi score is the lowest among all the settings. Preliminary results proved best for a dropout rate of 0.3, so we use this in all the experiments.
We find that adding dropout alleviates overfitting issues on the training set. If we reduce the dropout rate to 0.1, which means randomly setting some values to zero with probability 0.1, the training F1- Multi increases rapidly and the validation F1-multi score is the lowest among all the settings. Preliminary results proved best for a dropout rate of 0.3, so we use this in all the experiments.
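A minimal sketch (assumed PyTorch; the vocabulary and layer sizes are placeholders) of where the reported 0.3 dropout is applied, namely after the embeddings, the convolutional layers, and between the recurrent layers:

```python
# Dropout placement sketch; only the 0.3 rate comes from the text above.
import torch.nn as nn

drop = nn.Dropout(p=0.3)  # best preliminary rate reported in the text

word_emb = nn.Sequential(nn.Embedding(30000, 300), drop)          # word embeddings
pos_emb  = nn.Sequential(nn.Embedding(50, 50), drop)              # POS embeddings
conv     = nn.Sequential(nn.Conv1d(350, 96, 3, padding=1),        # after conv layers
                         nn.ReLU(), drop)
rnn      = nn.LSTM(96, 96, num_layers=2, dropout=0.3,             # stacked recurrent
                   batch_first=True)                               # layers
```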
|
How much is the dropout rate when the overfitting issues can be alleviated effectively?
|
0.3.
|
null | false
| 3
|
Deep Learning approaches have achieved impressive results on various NLP tasks BIBREF0, BIBREF1, BIBREF2 and have become the de facto approach for any NLP task. However, these deep learning techniques have been found to be less effective for low-resource languages when the available training data is very limited BIBREF3. Recently, several approaches like multi-task learning BIBREF4, multilingual learning BIBREF5, semi-supervised learning BIBREF2, BIBREF6 and transfer learning BIBREF7, BIBREF3 have been explored by the deep learning community to overcome data sparsity in low-resource languages. Transfer learning trains a model for a parent task and fine-tunes the learned parent model weights (features) for a related child task BIBREF7, BIBREF8. This effectively reduces the requirement on training data for the child task as the model would have learned relevant features from the parent task data, thereby improving the performance on the child task.
Transfer learning has also been explored in the multilingual Neural Machine Translation BIBREF3 , BIBREF9 , BIBREF10 . The goal is to improve the NMT performance on the source to target language pair (child task) using an assisting source language (assisting to target translation is the parent task). Here, the parent model is trained on the assisting and target language parallel corpus and the trained weights are used to initialize the child model. The child model can now be fine-tuned on the source-target language pairs, if parallel corpus is available. The divergence between the source and the assisting language can adversely impact the benefits obtained from transfer learning. Multiple studies have shown that transfer learning works best when the languages are related BIBREF3 , BIBREF10 , BIBREF9 . Several studies have tried to address lexical divergence between the source and the target languages BIBREF10 , BIBREF11 , BIBREF12 . However, the effect of word order divergence and its mitigation has not been explored. In a practical setting, it is not uncommon to have source and assisting languages with different word order. For instance, it is possible to find parallel corpora between English and some Indian languages, but very little parallel corpora between Indian languages. Hence, it is natural to use English as an assisting language for inter-Indian language translation.
To see how word order divergence can be detrimental, let us consider the case of the standard RNN (Bi-LSTM) encoder-attention-decoder architecture BIBREF13 . The encoder generates contextual representations (annotation vectors) for each source word, which are used by the attention network to match the source words to the current decoder state. The contextual representation is word-order dependent. Hence, if the assisting and the source languages do not have similar word order the generated contextual representations will not be consistent. The attention network (and hence the decoder) sees different contextual representations for similar words in parallel sentences across different languages. This makes it difficult to transfer knowledge learned from the assisting language to the source language.
We illustrate this by visualizing the contextual representations generated by the encoder of an English to Hindi NMT system for two versions of the English input: (a) original word order (SVO) (b) word order of the source language (SOV, for Bengali). Figure FIGREF1 shows that the encoder representations obtained are very different. The attention network and the decoder now have to work with very different representations. Note that the plot below does not take into account further lexical and other divergences between source and assisting languages, since we demonstrated word order divergence with the same language on the source side.
To address this word order divergence, we propose to pre-order the assisting language sentences to match the word order of the source language. We consider an extremely resource constrained scenario, where we do not have any parallel corpus for the child task. We are limited to a bilingual dictionary for transfer information from the assisting to the source language. From our experiments, we show that there is a significant increase in the translation accuracy for the unseen source-target language pair.
In a practical setting, it is not uncommon to have source and assisting languages with different word order.
|
Is it common to have source and assisting languages with different word order in a practical setting?
|
Yes, it is.
|
null | false
| 203
|
Our model is based on a standard stacked character-based LSTM BIBREF4 with two layers, followed by a hidden layer and a final output layer with softmax activations. The only modification made to accommodate the fact that we train the model with text in nearly a thousand languages, rather than one, is that language embedding vectors are concatenated to the inputs of the LSTMs at each time step and the hidden layer before the softmax. We used three separate embeddings for these levels, in an attempt to capture different types of information about languages. The model structure is summarized in fig:model.
In our experiments we use 1024-dimensional LSTMs, 128-dimensional character embeddings, and 64-dimensional language embeddings. Layer normalization BIBREF5 is used, but no dropout or other regularization since the amount of data is very large (about 3 billion characters) and training examples are seen at most twice. For smaller models early stopping is used. We use Adam BIBREF6 for optimization. Training takes between an hour and a few days on a K40 GPU, depending on the data size.
Training takes between an hour and a few days on a K40 GPU, depending on the data size.
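A minimal PyTorch sketch of the described model (not the authors' code): a 2-layer character LSTM whose inputs at each time step are the character embedding concatenated with a language embedding, with another language embedding concatenated before the output softmax. The dimensions follow the text (1024 LSTM units, 128-dim character and 64-dim language embeddings); vocabulary sizes are placeholders, layer normalization is omitted, and only two of the three language-embedding sites are modeled because the two LSTM layers are fused into one nn.LSTM.

```python
import torch
import torch.nn as nn

class CharLM(nn.Module):
    def __init__(self, n_chars=256, n_langs=1000,
                 char_dim=128, lang_dim=64, hidden=1024):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.lang_emb_in = nn.Embedding(n_langs, lang_dim)   # fed to LSTM inputs
        self.lang_emb_out = nn.Embedding(n_langs, lang_dim)  # fed to pre-softmax layer
        self.lstm = nn.LSTM(char_dim + lang_dim, hidden,
                            num_layers=2, batch_first=True)
        self.hidden_layer = nn.Linear(hidden + lang_dim, hidden)
        self.out = nn.Linear(hidden, n_chars)

    def forward(self, chars, lang):
        # chars: (batch, seq), lang: (batch,)
        seq_len = chars.size(1)
        lang_in = self.lang_emb_in(lang).unsqueeze(1).expand(-1, seq_len, -1)
        x = torch.cat([self.char_emb(chars), lang_in], dim=-1)
        h, _ = self.lstm(x)
        lang_out = self.lang_emb_out(lang).unsqueeze(1).expand(-1, seq_len, -1)
        h = torch.tanh(self.hidden_layer(torch.cat([h, lang_out], dim=-1)))
        return self.out(h)  # logits over next characters

model = CharLM()
logits = model(torch.randint(0, 256, (2, 16)), torch.tensor([3, 7]))
print(logits.shape)  # torch.Size([2, 16, 256])
```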
|
What GPU do they conduct raining on?
|
A K40 GPU
|
null | false
| null |
In the Oracle RDBMS environment, redo logs comprise files in a proprietary format which log a history of all changes made to the database. Each redo log file consists of redo records. A redo record, also called a redo entry, holds a group of change vectors, each of which describes or represents a change made to a single block in the database.
For example, if a user UPDATEs a salary-value in a table containing employee-related data, the DBMS generates a redo record containing change-vectors that describe changes to the data segment block for the table. And if the user then COMMITs the update, Oracle generates another redo record and assigns the change a "system change number" (SCN).
Whenever something changes in a datafile, Oracle records the change in the redo log. The name redo log indicates its purpose: If the database crashes, the RDBMS can redo (re-process) all changes on datafiles, which will take the database data back to the state it was in when the last redo record was written. DBAs use the views V$LOG, V$LOGFILE, V$LOG_HISTORY and V$THREAD to find information about the redo log of the database. Each redo log file belongs to exactly one group (of which at least two must exist). Exactly one of these groups is the CURRENT group (this can be queried using the status column of v$log). Oracle uses that current group to write the redo log entries. When the group is full, a log switch occurs, making another group the current one. Each log switch causes a checkpoint; however, the converse is not true: a checkpoint does not cause a redo log switch. One can also manually cause a redo-log switch using the ALTER SYSTEM SWITCH LOGFILE command.
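As an illustrative sketch only (the python-oracledb driver is an assumption and the connection details are placeholders), a session with DBA privileges can inspect V$LOG and force a manual log switch like this:

```python
import oracledb

# Placeholders: supply real credentials/DSN; ALTER SYSTEM needs DBA privileges.
conn = oracledb.connect(user="system", password="change_me", dsn="dbhost/orclpdb1")
cur = conn.cursor()

cur.execute("SELECT group#, sequence#, status FROM v$log ORDER BY group#")
for group_no, sequence_no, status in cur:
    print(group_no, sequence_no, status)    # exactly one group should be CURRENT

cur.execute("ALTER SYSTEM SWITCH LOGFILE")  # manually trigger a redo log switch
cur.close()
conn.close()
```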
|
What are redo logs in Oracle RDBMS?
|
In the Oracle RDBMS, redo logs are files that maintain a history of all changes made to the database. Each redo log file consists of redo records, each of which holds a group of change vectors. Redo logs can be used to reapply changes to the database after a crash, and they are also useful for creating incremental backups.
|
null | false
| 46
|
We utilize fasttext BIBREF18 to generate the embeddings of all words from both Text-SQuAD and Spoken-SQuAD. We adopt the phoneme sequence embeddings to replace the original character sequence embeddings using the method proposed by Li et al. BIBREF5 . The source domain model and the target domain model share the same word embedding matrix to improve the alignment between these two domains.
W-GAN is adopted for our domain discriminator BIBREF19 , which stacks 5 residual blocks of 1D convolutional layers with 96 filters and filter size 5 followed by one linear layer to convert each input vector sequence into one scalar value.
All models used in the experiments are trained with batch size 20, using Adam with learning rate INLINEFORM0 and an early stopping strategy. The dimension of the hidden state is set to 96 for all layers, and the number of self-attention heads is set to 2. The setup is slightly different but better than the setting suggested by the original QAnet.
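A possible PyTorch rendering of the domain discriminator described above (5 residual blocks of 1D convolutions with 96 filters of size 5, followed by a linear layer producing one scalar per sequence) is sketched below. The pooling over time and the padding scheme are assumptions, not details given in the text.

```python
# Hypothetical sketch of the W-GAN domain discriminator described above.
import torch
import torch.nn as nn

class ResBlock1D(nn.Module):
    def __init__(self, channels=96, kernel=5):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel, padding=kernel // 2)
        self.conv2 = nn.Conv1d(channels, channels, kernel, padding=kernel // 2)

    def forward(self, x):
        h = torch.relu(self.conv1(x))
        return x + self.conv2(h)          # residual connection

class DomainDiscriminator(nn.Module):
    def __init__(self, in_dim=96, n_blocks=5):
        super().__init__()
        self.proj = nn.Conv1d(in_dim, 96, 1)                       # project inputs to 96 channels
        self.blocks = nn.Sequential(*[ResBlock1D() for _ in range(n_blocks)])
        self.linear = nn.Linear(96, 1)                             # scalar critic output

    def forward(self, x):
        # x: (batch, time, in_dim) sequence of hidden vectors
        h = self.blocks(self.proj(x.transpose(1, 2)))              # (batch, 96, time)
        return self.linear(h.mean(dim=2)).squeeze(-1)              # one scalar per sequence
```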
Spoken-SQuAD [6] is an automatically generated corpus in which the document is in spoken form and the question is in text form.
|
What is Spoken-SQuAD?
|
Spoken-SQuAD [6] is an automatically generated corpus in which the document is in spoken form and the question is in text form.
|
null | false
| 95
|
People are increasingly using social networking platforms such as Twitter, Facebook, YouTube, etc. to communicate their opinions and share information. Although the interactions among users on these platforms can lead to constructive conversations, they have been increasingly exploited for the propagation of abusive language and the organization of hate-based activities BIBREF0, BIBREF1, especially due to the mobility and anonymous environment of these online platforms. Violence attributed to online hate speech has increased worldwide. For example, in the UK, there has been a significant increase in hate speech towards the immigrant and Muslim communities following the UK's leaving the EU and the Manchester and London attacks. The US has also seen a marked increase in hate speech and related crime following the Trump election. Therefore, governments and social network platforms confronting the trend must have tools to detect aggressive behavior in general, and hate speech in particular, as these forms of online aggression not only poison the social climate of the online communities that experience it, but can also provoke physical violence and serious harm BIBREF1.
Recently, the problem of online abusive detection has attracted scientific attention. Proof of this is the creation of the third Workshop on Abusive Language Online or Kaggle’s Toxic Comment Classification Challenge that gathered 4,551 teams in 2018 to detect different types of toxicities (threats, obscenity, etc.). In the scope of this work, we mainly focus on the term hate speech as abusive content in social media, since it can be considered a broad umbrella term for numerous kinds of insulting user-generated content. Hate speech is commonly defined as any communication criticizing a person or a group based on some characteristics such as gender, sexual orientation, nationality, religion, race, etc. Hate speech detection is not a stable or simple target because misclassification of regular conversation as hate speech can severely affect users’ freedom of expression and reputation, while misclassification of hateful conversations as unproblematic would maintain the status of online communities as unsafe environments BIBREF2.
To detect online hate speech, a large number of scientific studies have been dedicated to the use of Natural Language Processing (NLP) in combination with Machine Learning (ML) and Deep Learning (DL) methods BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF0. Although supervised machine learning-based approaches have used different text mining-based features such as surface features, sentiment analysis, lexical resources, linguistic features, knowledge-based features or user-based and platform-based metadata BIBREF8, BIBREF9, BIBREF10, they necessitate a well-defined feature extraction approach. The trend now seems to be changing direction, with deep learning models being used for both feature extraction and the training of classifiers. These newer models apply deep learning approaches such as Convolutional Neural Networks (CNNs), Long Short-Term Memory Networks (LSTMs), etc. BIBREF6, BIBREF0 to enhance the performance of hate speech detection models; however, they still suffer from a lack of labelled data or an inability to generalize.
Here, we propose a transfer learning approach for hate speech understanding using a combination of the unsupervised pre-trained model BERT BIBREF11 and some new supervised fine-tuning strategies. As far as we know, it is the first time that such exhaustive fine-tuning strategies are proposed along with a generative pre-trained language model to transfer learning to low-resource hate speech languages and improve performance of the task. In summary:
We propose a transfer learning approach using the pre-trained language model BERT learned on English Wikipedia and BookCorpus to enhance hate speech detection on publicly available benchmark datasets. Toward that end, for the first time, we introduce new fine-tuning strategies to examine the effect of different embedding layers of BERT in hate speech detection.
Our experiment results show that using the pre-trained BERT model and fine-tuning it on the downstream task by leveraging syntactical and contextual information of all BERT's transformers outperforms previous works in terms of precision, recall, and F1-score. Furthermore, examining the results shows the ability of our model to detect some biases in the process of collecting or annotating datasets. It can be a valuable clue in using pre-trained BERT model for debiasing hate speech datasets in future studies.
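The following sketch illustrates one plausible fine-tuning strategy of the kind described above, using the Hugging Face transformers library: BERT is loaded so that the hidden states of all transformer layers are exposed, and a classifier combines the [CLS] vectors of the last few layers. The specific pooling choice (concatenating the last four layers) is an assumed variant, not necessarily the authors' exact strategy.

```python
# Illustrative sketch only: fine-tuning BERT while leveraging several transformer layers.
import torch
import torch.nn as nn
from transformers import BertModel

class BertHateSpeechClassifier(nn.Module):
    def __init__(self, n_classes=3, n_last_layers=4):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased",
                                              output_hidden_states=True)
        self.n_last_layers = n_last_layers
        self.classifier = nn.Linear(768 * n_last_layers, n_classes)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        # out.hidden_states: embeddings + one tensor per transformer layer
        cls_per_layer = [h[:, 0] for h in out.hidden_states[-self.n_last_layers:]]
        return self.classifier(torch.cat(cls_per_layer, dim=-1))
```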
Our experiment results show that using the pre-trained BERT model and fine-tuning it on the downstream task by leveraging syntactical and contextual information of all BERT's transformers outperforms previous works in terms of precision, recall, and F1-score.
|
Does it bring good performance by using the pre-trained model and fine-tuning it on the downstream task by leveraging syntactical and contextual information of all BERT's transformers?
|
Yes.
|
null | false
| null |
Who are Daenerys Targaryen's dragons named after?
|
Drogon is named for her late husband, Khal Drogo, and Viserion and Rhaegal are named for her late brothers, Viserys and Rhaegar.
|
|
null | false
| null |
The New York Hotel in Salt Lake City, Utah, at 42 Post Office Pl., was built in 1906. It was listed on the National Register of Historic Places in 1980.
It was a work of leading Salt Lake City architect Richard K.A. Kletting.
It is a three-story brick building designed to have shops on the ground floor and 62 hotel rooms above. Some hotel room suites had bathrooms; there were also single rooms served by a bathroom on each floor. It was "completely modern" in 1906, having both steam heat and electric lights.
The building is 49 feet (15 m) tall; the first/second/third floors are 14 feet (4.3 m), 10 feet (3.0 m), and 10.5 feet (3.2 m) tall, respectively.
It was built as a luxury hotel for Orange J. Salisbury, a mining engineer and businessman who obtained patents and started the United Filter Corporation.
|
Is there a New York Hotel in Salt Lake City
|
The New York Hotel in Salt Lake City, Utah, at 42 Post Office Pl., was built in 1906. It was listed on the National Register of Historic Places in 1980.
It was a work of leading Salt Lake City architect Richard K.A. Kletting.
It is a three-story brick building designed to have shops on the ground floor and 62 hotel rooms above. Some hotel room suites had bathrooms; there were also single rooms served by a bathroom on each floor. It was "completely modern" in 1906, having both steam heat and electric lights.
The building is 49 feet (15 m) tall; the first/second/third floors are 14 feet (4.3 m), 10 feet (3.0 m), and 10.5 feet (3.2 m) tall, respectively.
It was built as a luxury hotel for Orange J. Salisbury, a mining engineer and businessman who obtained patents and started the United Filter Corporation.
|
null | false
| null |
I am getting married, give me the list of the main tasks I need to do to organise the event.
|
Congratulations on getting married! Here is the list of some of the main tasks you need to do for your special day:
- Choose the wedding date.
- Create the guest list.
- Choose the location of the ceremony and of the wedding reception.
- Choose the honeymoon destination.
- Choose the style of the decoration of your day.
- Choose the wedding dress or suit.
- Find the best suppliers.
- Organize all the bureaucracy of the wedding.
- Choose the wedding menu.
- Decide how to seat guests.
- Choose the groomsmen, witnesses, bridesmaids and best man.
- Choose the wedding cake.
- Buy your wedding rings.
- Find all the accessories for the bridal look.
- Close the guest list.
- Book barber and beauty center.
- Prepare the emergency kit.
- Prepare the wedding night suitcase.
- Prepare the honeymoon suitcase.
- Go get the bridal bouquet.
- Relax and enjoy the last singles hours.
|
|
null | false
| null |
What is a verb?
|
A verb is a word that describes what the subject of a sentence is doing. Verbs can indicate (physical or mental) actions, occurrences, and states of being.
|
|
null | false
| null |
Who are the best hitters in major league baseball I've never seen?
|
Given that your birth date is in 1973, there are hundreds of great hitters who played before you were born. In the 1940s you had Joe DiMaggio and Ted Williams. The 1950s featured Mickey Mantle and Stan Musial. By the time the 1960s started, Roger Maris and Willie Mays were leading their leagues in hitting. This is just a short list of great hitters who played before you were born.
|
|
null | false
| null |
Is The Last Of Us Part I a good PC game?
|
Yes and no. The Last of Us franchise has great gameplay and storytelling. It was released on PlayStation in 2013 and received overwhelmingly positive reviews. However, the recent port from PlayStation to PC was a disaster. It is plagued with software bugs like constant crashing and stuttering. Even the latest generation of PC hardware cannot handle it well because it is poorly optimized. That said, the game is being patched regularly and may improve in the future.
|
|
null | false
| null |
The Boston Marathon began on 19 April 1897, and was inspired by the success of the first marathon competition in the 1896 Summer Olympics. It is the world's oldest run annual marathon, and ranks as one of the world's most prestigious road racing events. Its course runs from Hopkinton in southern Middlesex County, to Copley Square in Boston. Johnny Hayes' victory at the 1908 Summer Olympics also contributed to the early growth of long-distance running and marathoning in the United States. Later that year, races around the holiday season including the Empire City Marathon held on New Year's Day 1909 in Yonkers, New York, marked the early running craze referred to as "marathon mania". Following the 1908 Olympics, the first five amateur marathons in New York City were held on days that held special meanings: Thanksgiving Day, the day after Christmas, New Year's Day, Washington's Birthday, and Lincoln's Birthday.
Frank Shorter's victory in the marathon at the 1972 Summer Olympics would spur national enthusiasm for the sport more intense than that which followed Hayes' win 64 years earlier. In 2014, an estimated 550,600 runners completed a marathon within the United States. This can be compared to 143,000 in 1980. Today marathons are held all around the world on a nearly weekly basis.
|
What is the world's oldest annual marathon based on the reference text below?
|
The Boston Marathon is the world's oldest annual marathon, beginning on April 19th 1897.
|
null | false
| null |
Scheme is a dialect of the Lisp family of programming languages. Scheme was created during the 1970s at the MIT AI Lab and released by its developers, Guy L. Steele and Gerald Jay Sussman, via a series of memos now known as the Lambda Papers. It was the first dialect of Lisp to choose lexical scope and the first to require implementations to perform tail-call optimization, giving stronger support for functional programming and associated techniques such as recursive algorithms. It was also one of the first programming languages to support first-class continuations. It had a significant influence on the effort that led to the development of Common Lisp.
|
Who created the scheme programming language?
|
The scheme programming language was developed by Guy L. Steele and Gerald Jay Sussman during the 1970s at the MIT AI Lab in a series of memos known as the Lambda Papers.
|
null | false
| null |
Why are lunar eclipses more common than solar?
|
The Earth is much bigger than the Moon and casts a large shadow on it, and a lunar eclipse is visible from anywhere on the night side of the Earth. So more people can see it, and they don't need to be standing in the darkest part of a narrow shadow, as they would to see a total solar eclipse.
|
|
null | false
| null |
The Australian Open is a tennis tournament held annually at Melbourne Park in Melbourne, Victoria, Australia. The tournament is the first of the four Grand Slam tennis events held each year, preceding the French Open, Wimbledon, and the US Open. The Australian Open starts in the middle of January and continues for two weeks coinciding with the Australia Day holiday. It features men's and women's singles; men's, women's, and mixed doubles; junior's championships; and wheelchair, legends, and exhibition events. Before 1988, it was played on grass courts, but since then two types of hardcourt surfaces have been used: green-coloured Rebound Ace up to 2007 and blue Plexicushion since 2008.
|
What is the Australian Open?
|
The Australian Open is an annual tennis tournament held at Melbourne Park in Melbourne, Victoria, Australia. It is one of the four Grand Slam tennis events, alongside the French Open, Wimbledon and the US Open.
|
null | false
| null |
Is FreeBSD a Linux distribution?
|
No. FreeBSD is a complete operating system that descends from the original Unix system through the Berkeley Software Distribution (BSD) development line. Linux is a family of operating system distributions packaged by third parties, using the Linux kernel. FreeBSD is also licensed under a permissive BSD license, while Linux uses a copyleft GPL license.
|
|
null | false
| 29
|
MLP contains three types of layers - an input layer, an output layer and some hidden layers. The input layer receives the signal, the output layer gives a decision or prediction about the input, and the computation of the MLP is conducted in the hidden layers. In our system, we use 100 layers. For weight optimization, we use the Limited-memory Broyden–Fletcher–Goldfarb–Shanno (LBFGS) optimization algorithm.
SVM finds an optimal hyper-plane that maximizes the margin between classes. We use a Radial Basis Function (RBF) kernel in our system to make the decision boundary curve-shaped. For the decision function shape, we use the original one-vs-one (ovo) decision function.
NBC is based on Bayes' Theorem, which gives the probability of an event occurring based on some conditions related to that event. We use a Multinomial Naive Bayes Classifier with smoothing parameter equal to 0.1. A zero probability cancels the effects of all the other probabilities.
Stochastic gradient descent optimizes an objective function with suitable smoothness properties BIBREF27. It selects a few examples randomly instead of the whole dataset at each iteration. We use 'L2' regularization to reduce overfitting.
Gradient Boosting Classifier produces a prediction model consisting of weak prediction models. Gradient boosting uses decision trees. We use 100 boosting stages in this work.
K-NN is a supervised classification and regression algorithm. It uses the neighbours of the given sample to identify its class. K determines the number of neighbours to be considered. We set the value of K equal to 13 in this work.
RF is an ensemble learning technique. It constructs a large number of decision trees during training and then predicts the majority class. We use 500 decision trees in the forest and the "entropy" function to measure the quality of a split.
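For reference, the classifier configurations listed above map naturally onto scikit-learn, as sketched below. The hyperparameters mirror the text; mapping the MLP's "100 layers" to hidden_layer_sizes=(100,) is an assumption, and anything not specified is left at its default.

```python
# Sketch of the classifier configurations described above, using scikit-learn.
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import SGDClassifier
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

classifiers = {
    "MLP": MLPClassifier(hidden_layer_sizes=(100,), solver="lbfgs"),
    "SVM": SVC(kernel="rbf", decision_function_shape="ovo"),
    "NBC": MultinomialNB(alpha=0.1),
    "SGD": SGDClassifier(penalty="l2"),
    "GBC": GradientBoostingClassifier(n_estimators=100),
    "KNN": KNeighborsClassifier(n_neighbors=13),
    "RF":  RandomForestClassifier(n_estimators=500, criterion="entropy"),
}

# Example usage (X_train, y_train, X_test, y_test are placeholders):
# for name, clf in classifiers.items():
#     clf.fit(X_train, y_train)
#     print(name, clf.score(X_test, y_test))
```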
We use 100 boosting stages in this work.
|
How many boosting stages do the authors use in this work?
|
100 boosting stages.
|
null | false
| 448
|
Prior work has suggested that persistent learning is the most effective method for learning a valid EBM density. Furthermore, works that learn a valid density have avoided rejuvenation because the incorporation of newly rejuvenated samples into the persistent bank ensures that EBM updates will always include samples that are not at the steady-state. However, persistent learning without rejuvenation has shortcomings mentioned in Section 3. We present hypothesized conditions for learning a valid density that motivate the design of our MCMC initialization:
• After a certain point in training, all samples used to update the EBM must be approximate steady-state samples of the current model p(x; θ).
• Persistent samples that are newly rejuvenated (up to about 50K Langevin steps since rejuvenation, and possibly many more) cannot be approximate steady-state samples for any known rejuvenation sources, including data, generators, and noise.
• Persistent samples that have undergone sufficiently many lifetime Langevin updates for a model whose weights are changing very slowly can be approximate steady-state samples.
Figure: Visualization of our longrun initialization procedure. Newly rejuvenated samples must remain in the burn-in bank until they have approached the model steady-state, at which point they move to the update bank to be used for model gradients.
Both the second and third points are corroborated by prior work as well as our own observations. The third point means that persistent states updated with shortrun Langevin can eventually act as longrun Langevin samples if the optimizer learning rate is small, because the EBM samples in previous timesteps are essentially samples from the current EBM.
Learning a valid density that includes rejuvenation while satisfying the conditions above requires separating the newly rejuvenated samples from samples that are used to update the EBM. This leads us to introduce two persistent image banks: one for newly rejuvenated samples, and one for samples that will be used to update the EBM. Samples in the newly rejuvenated bank that have been updated sufficiently many times will eventually replace samples from the bank used to update the EBM, at which point newly rejuvenated states will be added to the burn-in bank. Figure shows a visualization of the MCMC initialization method. As in Section 3, we will use a pre-trained generator to efficiently obtain high-quality rejuvenated samples so that the generation process is fully synthetic, although data samples could be used as well. Our goal is to preserve the sample quality for an arbitrary number of MCMC steps. We note that this is a sufficient but not necessary condition for learning a valid EBM density. Nonetheless, stable sampling is an important step towards rigorous probabilistic EBMs. Even with our improved initialization, we find that extremely longrun trajectories of 1 million or more MCMC steps still tend to oversaturate, although to a much lesser degree. To further stabilize the appearance of extremely longrun Langevin samples, we include prior energy terms in the model. Our longrun EBMs have the form
p(x; θ) ∝ exp{ −U(x; θ) − U₀(x; θ₀) − ‖x‖² / (2σ²) },
where U(x; θ) is the model whose weights are updated, U₀(x; θ₀) is a prior EBM with fixed weights θ₀ and σ is a parameter controlling the strength of a Gaussian prior. We used a prior EBM in a shortrun manner. The role of the prior EBM is to provide some stability but also to provide a tendency to oversaturate at longer trajectories so that the current EBM learns to correct oversaturation. The Gaussian prior is meant to discourage unbounded activations outside of the image hypercube. Further discussion is in Appendix H. We find that including both of these terms significantly improves the ability to learn quality synthesis over long trajectories.
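A hedged sketch of Langevin sampling under the longrun energy described above (learned model U, fixed prior EBM U0, and a Gaussian prior of strength sigma) is given below. The step size, the number of steps, and the assumption that U and U0 are callables returning per-sample energies for 4D image tensors are illustrative, not values from the text.

```python
# Sketch of Langevin dynamics for the combined longrun energy described above.
import torch

def langevin_steps(x, U, U0, sigma=1.0, n_steps=100, step_size=1e-2):
    # x: (batch, C, H, W) images; U, U0: callables returning per-sample energies
    for _ in range(n_steps):
        x = x.detach().requires_grad_(True)
        energy = U(x) + U0(x) + (x ** 2).sum(dim=(1, 2, 3)) / (2 * sigma ** 2)
        grad, = torch.autograd.grad(energy.sum(), x)
        # Langevin update: gradient step plus Gaussian noise
        x = x - 0.5 * step_size ** 2 * grad + step_size * torch.randn_like(x)
    return x.detach()
```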
As in Section 3, we will use a pre-trained generator to efficiently obtain high-quality rejuvenated samples so that the generation process is fully synthetic, although data samples could be used as well. Our goal is to preserve the sample quality for an arbitrary number of MCMC steps. We note that this is a sufficient but not necessary condition for learning a valid EBM density.
Figure 9: Left: MCMC samples after 100K steps using a GLOW model (Kingma & Dhariwal, 2018) trained on CIFAR-10. Right: MCMC samples after 100K steps using a conditional recovery likelihood model (Gao et al., 2020b) trained on CIFAR-10. MCMC samples were initialized from data samples. Neither model can correctly approximate the distribution of probability mass for the data density. The problem of steady-state misalignment extends beyond EBMs to many other generative density models. We tried several different temperatures close to 1 for the GLOW model and found equivalent results.
|
How do the results compare when using data samples instead of a pre-trained generator?
|
We will include these results in future versions for both midrun and longrun learning. The differences are negligible.
|
null | false
| null |
Plan for a three day road trip for May long weekend from Westfield, New Jersey
|
May 26: Drive to Montreal from Westfield, NJ. Time taken 6 hrs. Stay there overnight
May 27: Have a quick breakfast, check out and drive to Mount Tremblant. Time: 90 minutes
May 27: Hike Grand Brule (red path) on the way up and green path on the way down
May 27: Stroll around the Tremblant village
May 28: Drive to Quebec City. Time taken: 4 hrs
May 28: Explore the city, aquarium and go for wildlife watching
May 29: Drive to Montreal. Spend time exploring the city
May 30: Drive back to Westfield, New Jersey
|
|
null | false
| null |
Which MLB baseball team has the most World Series wins?
|
The World Series is the annual championship series of Major League Baseball (MLB) and concludes the MLB postseason. First played in 1903, the World Series championship is a best-of-seven playoff and is a contest between the champions of baseball's National League (NL) and American League (AL). Often referred to as the "Fall Classic", the modern World Series has been played every year since 1903 with two exceptions: in 1904, when the NL champion New York Giants declined to play the AL champion Boston Americans; and in 1994, when the series was canceled due to the players' strike.
The best-of-seven style has been the format of all World Series except in 1903, 1919, 1920, and 1921, when the winner was determined through a best-of-nine playoff. Although the large majority of contests have been played entirely during the month of October, a small number of Series have also had games played during September and November. The Series-winning team is awarded the Commissioner's Trophy. Players, coaches and others associated with the team are generally given World Series rings to commemorate their victory; however, they have received other items such as pocket watches and medallions in the past. The winning team is traditionally invited to the White House to meet the President of the United States.
A total of 118 World Series have been contested through 2022, with the AL champion winning 67 and the NL champion winning 51. The New York Yankees of the AL have played in 40 World Series, winning 27 – the most championship appearances and most victories by any team amongst the major North American professional sports leagues. The Dodgers of the NL have the most losses with 14, while the Yankees have the most losses among AL teams with 13. The St. Louis Cardinals have won 11 championships, the most championships among NL clubs and second-most all-time behind the Yankees, and have made 19 total appearances, third-most among NL clubs. The Dodgers have represented the NL the most in the World Series with 21 appearances.
The Seattle Mariners are the only current MLB franchise that has never appeared in a World Series; the San Diego Padres, Colorado Rockies, Texas Rangers, Tampa Bay Rays, and Milwaukee Brewers have all played in the Series but have never won it. The Los Angeles Angels, Arizona Diamondbacks and Washington Nationals are the only teams who have won their only World Series appearance, and the Toronto Blue Jays and the Miami Marlins have won both of their World Series appearances. The Toronto Blue Jays are the only franchise from outside the United States to appear in and win a World Series, winning in 1992 and 1993. The Houston Astros are the only franchise to have represented both the NL (2005) and the AL (2017, 2019, 2021, 2022), winning the Series in 2017 and 2022.
The current (2022) World Series champions are the Houston Astros.
|
|
null | false
| null |
The word cookie dates from at least 1701 in Scottish usage where the word meant "plain bun", rather than thin baked good, and so it is not certain whether it is the same word. From 1808, the word "cookie" is attested "...in the sense of "small, flat, sweet cake" in American English. The American use is derived from Dutch koekje "little cake," which is a diminutive of "koek" ("cake"), which came from the Middle Dutch word "koke". Another claim is that the American name derives from the Dutch word koekje or more precisely its informal, dialect variant koekie which means little cake, and arrived in American English with the Dutch settlement of New Netherland, in the early 1600s.
|
What is the etymology of the word cookie?
|
The earliest known usage of the word cookie comes from 1701 in Scotland, where it meant "plain bun". Starting in 1808 the term was used in American English in the sense of a small, flat, sweet cake, a usage believed to be derived from the Dutch word koekje.
|
null | false
| null |
Individuals whom the media refers to as members of the PayPal Mafia include:
Peter Thiel, PayPal founder and former chief executive officer who is sometimes referred to as the "don" of the PayPal Mafia
Max Levchin, founder and chief technology officer at PayPal
Elon Musk, co-founder of Zip2, founder of X.com which merged with Confinity to form PayPal. Musk later founded SpaceX, co-founded Tesla, Inc., co-founded OpenAI, Neuralink, founded The Boring Company, and became owner of Twitter, Inc.
David O. Sacks, former PayPal COO who later founded Geni.com and Yammer
Scott Banister, early advisor and board member at PayPal.
Roelof Botha, former PayPal CFO who later became a partner and Senior Steward of venture capital firm Sequoia Capital
Steve Chen, former PayPal engineer who co-founded YouTube.
Reid Hoffman, former executive vice president who later founded LinkedIn and was an early investor in Facebook, Aviary
Ken Howery, former PayPal CFO who became a partner at Founders Fund
Chad Hurley, former PayPal web designer who co-founded YouTube
Eric M. Jackson, who wrote the book The PayPal Wars and became chief executive officer of WND Books and co-founded CapLinked
Jawed Karim, former PayPal engineer who co-founded YouTube
Jared Kopf, former PayPal (executive assistant to Peter Thiel) who co-founded Slide, HomeRun and NextRoll
Dave McClure, former PayPal marketing director, a super angel investor for start up companies
Andrew McCormack, co-founder of Valar Ventures
Luke Nosek, PayPal co-founder and former vice president of marketing and strategy, became a partner at Founders Fund with Peter Thiel and Ken Howery
Keith Rabois, former executive at PayPal who later worked at LinkedIn, Slide, Square, Khosla Ventures, and currently with Peter Thiel at Founders Fund, and personally invested in Tokbox, Xoom, Slide, LinkedIn, Geni, Room 9 Entertainment, YouTube, and Yelp
Jack Selby, former vice president of corporate and international development at PayPal who co-founded Clarium Capital with Peter Thiel, later becoming managing director of Grandmaster Capital Management
Premal Shah, former product manager at PayPal, became the founding president of Kiva.org
Russel Simmons, former PayPal engineer who co-founded Yelp Inc.
Jeremy Stoppelman, former vice president of technology at PayPal who later co-founded Yelp
Yishan Wong, former engineering manager at PayPal, later worked at Facebook and became the CEO of Reddit
|
How was Elon Musk involved in Paypal?
|
Elon Musk founded X.com, which merged with Confinity to form Paypal.
|
null | false
| null |
Give me different ways I can travel from one place to another
|
Modes of transport include car, bus, bicycle and airplane, along with walking, running, swimming, crawling and many more.
|
|
null | false
| null |
What are popular young adult books?
|
Young adult novels typically tell a coming-of-age story with a main character in their teens or early adulthood. From 2000 to 2015, popular books for young adults included Harry Potter, Divergent, and The Hunger Games. Since then, young adult novels have more often featured themes of diversity, inclusion, and overcoming adversity, and have been less focused on action-based storytelling. The popular graphic novel series Heartstopper is a good example of a book focused on DEI; it was eventually turned into a high-profile television show on Netflix.
|
|
null | false
| null |
Type 2 diabetes, formerly known as adult-onset diabetes, is a form of diabetes mellitus that is characterized by high blood sugar, insulin resistance, and relative lack of insulin.
|
What is Type 2 Diabetes?
|
Type 2 diabetes, formerly known as adult-onset diabetes, is a form of diabetes mellitus that is characterized by high blood sugar, insulin resistance, and relative lack of insulin. Common symptoms include increased thirst, frequent urination, and unexplained weight loss. Symptoms may also include increased hunger, feeling tired, and sores (wounds) that do not heal. Often symptoms come on slowly. Long-term complications from high blood sugar include heart disease, strokes, diabetic retinopathy which can result in blindness, kidney failure, and poor blood flow in the limbs which may lead to amputations. The sudden onset of hyperosmolar hyperglycemic state may occur; however, ketoacidosis is uncommon.
|
null | false
| null |
Choose the word which is different from the rest: Trunk, Tree, Fruit, Leaf, Flower
|
Tree is the odd one out; all the others are parts of a tree.
|
|
null | false
| null |
Best food in the world
|
Biriyani
|
|
null | false
| 393
|
To assess the performance of binary paragraph vectors, we carried out experiments on three datasets: 20 Newsgroups, a cleansed version (also called v2) of Reuters Corpus Volume 1 (RCV1) and English Wikipedia. As paragraph vectors can be trained with relatively large vocabularies, we did not perform any stemming of the source text. However, we removed stop words as well as words shorter than two characters and longer than 15 characters. Results reported by BIBREF15 indicate that performance of PV-DBOW can be improved by including n-grams in the model. We therefore evaluated two variants of Binary PV-DBOW: one predicting words in documents and one predicting words and bigrams. Since 20 Newsgroups is a relatively small dataset, we used all words and bigrams from its documents. This amounts to a vocabulary with slightly over one million elements. For the RCV1 dataset we used words and bigrams with at least 10 occurrences in the text, which gives a vocabulary with approximately 800 thousand elements. In case of English Wikipedia we used words and bigrams with at least 100 occurrences, which gives a vocabulary with approximately 1.5 million elements.
The 20 Newsgroups dataset comes with reference train/test sets. In case of RCV1 we used half of the documents for training and the other half for evaluation. In case of English Wikipedia we held out for testing randomly selected 10% of the documents. We perform document retrieval by selecting queries from the test set and ordering other test documents according to the similarity of the inferred codes. We use Hamming distance for binary codes and cosine similarity for real-valued representations. Results are averaged over queries. We assess the performance of our models with precision-recall curves and two popular information retrieval metrics, namely mean average precision (MAP) and the normalized discounted cumulative gain at the 10th result (NDCG@10) BIBREF16 . The results depend, of course, on the chosen document relevancy measure. Relevancy measure for the 20 Newsgroups dataset is straightforward: a retrieved document is relevant to the query if they both belong to the same newsgroup. In RCV1 each document belongs to a hierarchy of topics, making the definition of relevancy less obvious. In this case we adopted the relevancy measure used by BIBREF3 . That is, the relevancy is calculated as the fraction of overlapping labels in a retrieved document and the query document. Overall, our selection of test datasets and relevancy measures for 20 Newsgroups and RCV1 follows BIBREF3 , enabling comparison with semantic hashing codes. To assess the relevancy of articles in English Wikipedia we can employ categories assigned to them. However, unlike in RCV1, Wikipedia categories can have multiple parent categories and cyclic dependencies. Therefore, for this dataset we adopted a simplified relevancy measure: two articles are relevant if they share at least one category. We also removed from the test set categories with less than 20 documents as well as documents that were left with no categories. Overall, the relevancy is measured over more than INLINEFORM0 categories, making English Wikipedia harder than the other two benchmarks.
We use AdaGrad BIBREF17 for training and inference in all experiments reported in this work. During training we employ dropout BIBREF18 in the embedding layer. To facilitate models with large vocabularies, we approximate the gradients with respect to the softmax logits using the method described by BIBREF9 . Binary PV-DM networks use the same number of dimensions for document codes and word embeddings.
Performance of 128- and 32-bit binary paragraph vector codes is reported in Table TABREF8 and in Figure FIGREF7 . For comparison we also report performance of real-valued paragraph vectors. Note that the binary codes perform very well, despite their far lower capacity: on 20 Newsgroups and RCV1 the 128-bit Binary PV-DBOW trained with bigrams approaches the performance of the real-valued paragraph vectors, while on English Wikipedia its performance is slightly lower. Furthermore, Binary PV-DBOW with bigrams outperforms semantic hashing codes: comparison of precision-recall curves from Figures FIGREF7 a and FIGREF7 b with BIBREF3 shows that 128-bit codes learned with this model outperform 128-bit semantic hashing codes on 20 Newsgroups and RCV1. Moreover, the 32-bit codes from this model outperform 128-bit semantic hashing codes on the RCV1 dataset, and on the 20 Newsgroups dataset give similar precision up to approximately 3% recall and better precision for higher recall levels. Note that the difference in this case lies not only in retrieval precision: the short 32-bit Binary PV-DBOW codes are more efficient for indexing than long 128-bit semantic hashing codes.
We also compared binary paragraph vectors against codes constructed by first inferring short, real-valued paragraph vectors and then using a separate hashing algorithm for binarization. When the dimensionality of the paragraph vectors is equal to the size of binary codes, the number of network parameters in this approach is similar to that of Binary PV models. We experimented with two standard hashing algorithms, namely random hyperplane projection BIBREF19 and iterative quantization BIBREF20 . Paragraph vectors in these experiments were inferred using PV-DBOW with bigrams. Results reported in Table TABREF9 show no benefit from using a separate algorithm for binarization. On the 20 Newsgroups and RCV1 datasets Binary PV-DBOW yielded higher MAP than the two baseline approaches. On English Wikipedia iterative quantization achieved MAP equal to Binary PV-DBOW, while random hyperplane projection yielded lower MAP. Some gain in precision of top hits can be observed for iterative quantization, as indicated by NDCG@10. However, precision of top hits can also be improved by querying with Real-Binary PV-DBOW model (Section SECREF15 ). It is also worth noting that end-to-end inference in Binary PV models is more convenient than inferring real-valued vectors and then using another algorithm for hashing.
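A toy sketch of the random hyperplane projection baseline mentioned above, followed by Hamming-distance retrieval over the resulting binary codes, is given below. The data and dimensions are placeholders; iterative quantization would replace the simple sign step with a learned rotation.

```python
# Sketch: random hyperplane projection binarization plus Hamming-distance retrieval.
import numpy as np

rng = np.random.default_rng(0)
docs = rng.normal(size=(1000, 32))        # real-valued paragraph vectors (placeholder)
query = rng.normal(size=(32,))

R = rng.normal(size=(32, 32))             # random hyperplanes: one per output bit
doc_codes = (docs @ R > 0)                # boolean binary codes for the documents
query_code = (query @ R > 0)

hamming = (doc_codes != query_code).sum(axis=1)
top10 = np.argsort(hamming)[:10]          # most similar documents by Hamming distance
print(top10)
```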
BIBREF15 argue that PV-DBOW outperforms PV-DM on a sentiment classification task, and demonstrate that the performance of PV-DBOW can be improved by including bigrams in the vocabulary. We observed similar results with Binary PV models. That is, including bigrams in the vocabulary usually improved retrieval precision. Also, codes learned with Binary PV-DBOW provided higher retrieval precision than Binary PV-DM codes. Furthermore, to choose the context size for the Binary PV-DM models, we evaluated several networks on validation sets taken out of the training data. The best results were obtained with a minimal one-word, one-sided context window. This is the distributed memory architecture most similar to the Binary PV-DBOW model.
Note that the binary codes perform very well, despite their far lower capacity: on 20 Newsgroups and RCV1 the 128-bit Binary PV-DBOW trained with bigrams approaches the performance of the real-valued paragraph vectors, while on English Wikipedia its performance is slightly lower.
|
What is the performance of the binary codes?
|
Note that the binary codes perform very well, despite their far lower capacity: on 20 Newsgroups and RCV1 the 128-bit Binary PV-DBOW trained with bigrams approaches the performance of the real-valued paragraph vectors, while on English Wikipedia its performance is slightly lower.
|
null | false
| 361
|
Neural Networks have proven to be useful for automating tasks such as question answering, system response, and language generation considering large textual datasets. In learning systems, bias can be defined as the negative consequences derived from the implicit association of patterns that occur in a high-dimensional space. In dialogue systems, these patterns represent associations between word embeddings that can be measured by a Cosine distance to observe male- and female-related analogies that resemble the gender stereotypes of the real world. We propose an automatic technique to mitigate bias in language generation models based on the use of an external memory in which word embeddings are associated with gender information, and they can be sparsely updated based on content-based lookup.
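Purely as a speculative illustration of the idea sketched in this abstract, the snippet below performs a content-based lookup over an external memory of word embeddings and updates only the selected slots. The top-k selection standing in for the "Fair Region" and the grad_fn placeholder are assumptions; the paper's exact mechanism may differ.

```python
# Speculative sketch: sparse, content-based update of an external embedding memory.
import torch
import torch.nn.functional as F

def fair_region_update(memory, query, grad_fn, k=5, lr=0.1):
    # memory: (slots, dim) word-embedding memory; query: (dim,) current context vector
    sims = F.cosine_similarity(memory, query.unsqueeze(0), dim=1)
    region = sims.topk(k).indices                 # content-based lookup: the selected "region"
    grads = grad_fn(memory[region])               # hypothetical gradients for the selected slots
    memory[region] -= lr * grads                  # sparse update; all other slots stay fixed
    return region

# Example usage with placeholder data and a dummy gradient function:
# mem = torch.randn(1000, 64); q = torch.randn(64)
# fair_region_update(mem, q, grad_fn=lambda m: torch.zeros_like(m))
```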
The main contributions of our work are the following:
We introduce a novel architecture that considers the notion of a Fair Region to update a subset of the trainable parameters of a Memory Network.
We experimentally show that this architecture helps mitigate gender bias amplification in the automatic generation of text when extending the Sequence2Sequence model.
We propose an automatic technique to mitigate bias in language generation models based on the use of an external memory in which word embeddings are associated with gender information, and they can be sparsely updated based on content-based lookup.
|
What technique do they propose?
|
An automatic technique to mitigate bias in language generation models based on the use of an external memory in which word embeddings are associated with gender information.
|
null | false
| 160
|
In this work, we introduce a meta neural network, which generates a compositional network to dynamically compose constituents over a tree structure. The parameters of the compositional function vary from position to position and from sample to sample, allowing for more sophisticated operations on the input.
To evaluate our models, we choose two typical NLP tasks involving six datasets. The qualitative and quantitative experiment results demonstrate the effectiveness of our models.
To evaluate our models, we choose two typical NLP tasks involving six datasets. The qualitative and quantitative experiment results demonstrate the effectiveness of our models
|
Can the model proposed by the author solve the problem effectively through experiments?
|
Yes.
|
1909.08824
| false
| null |
We compared our proposed model with the following four baseline methods:
RNN-based Seq2Seq proposed by BIBREF4 (BIBREF4) for the If-Then reasoning on Atomic.
Variational Seq2Seq combines a latent variable with the encoder-decoder structure by converting the last hidden state of the RNN encoder into a Gaussian distributed latent variable BIBREF8.
VRNMT, proposed by BIBREF19 (BIBREF19), combines CVAE with an attention-based encoder-decoder framework by introducing a latent variable to model the semantic distribution of targets.
CWVAE-Unpretrained refers to the CWVAE model without the pretrain stage.
Note that, for each baseline method, we train distinct models for each distinct inference dimension, respectively.
FLOAT SELECTED: Table 6: Average perplexity and BLEU scores (reported in percentages) for the top 10 generations under each inference dimension of Atomic. The best result for each dimension is emboldened.
We compared our proposed model with the following four baseline methods:
RNN-based Seq2Seq proposed by BIBREF4 (BIBREF4) for the If-Then reasoning on Atomic.
Variational Seq2Seq combines a latent variable with the encoder-decoder structure by converting the last hidden state of the RNN encoder into a Gaussian distributed latent variable BIBREF8.
VRNMT, proposed by BIBREF19 (BIBREF19), combines CVAE with an attention-based encoder-decoder framework by introducing a latent variable to model the semantic distribution of targets.
CWVAE-Unpretrained refers to the CWVAE model without the pretrain stage.
Note that, for each baseline method, we train distinct models for each distinct inference dimension, respectively.
FLOAT SELECTED: Table 6: Average perplexity and BLEU scores (reported in percentages) for the top 10 generations under each inference dimension of Atomic. The best result for each dimension is emboldened.
|
Which models do they use as baselines on the Atomic dataset?
|
The answers are shown as follows:
* RNN-based Seq2Seq
* Variational Seq2Seq
* VRNMT
* CWVAE-Unpretrained
|
null | false
| null |
The Mandalorian is an American space Western television series created by Jon Favreau for the streaming service Disney+. It is the first live-action series in the Star Wars franchise, beginning five years after the events of Return of the Jedi (1983), and stars Pedro Pascal as the title character, a lone bounty hunter who goes on the run to protect the Force-sensitive child Grogu. Starting in the third season, Katee Sackhoff also stars as Mandalorian Bo-Katan Kryze, following her appearance in the second season.
|
From the text, how long after the events of Return of the Jedi is the Mandalorian series set? Convert years into months, and use letters to spell out the answer.
|
sixty
|
null | false
| null |
Are alpacas and llamas the same thing?
|
No, alpacas and llamas are related species but are not the same species. Both are native to South America, but they are slightly different sizes and have different physical features.
|
|
null | false
| null |
Which is a species of fish? Lung or Lungfish
|
Lungfish
|
|
null | false
| null |
Identify which instrument is string or percussion: Fiddlesticks, Tricordia
|
Tricordia is string, Fiddlesticks is percussion.
|
|
null | false
| null |
Howl's Moving Castle is a fantasy novel by British author Diana Wynne Jones, first published in 1986 by Greenwillow Books of New York. It was a runner-up for the annual Boston Globe–Horn Book Award, and won the Phoenix Award twenty years later. It was adapted into a critically acclaimed 2004 animated film of the same name, which was nominated for the Academy Award for Best Animated Feature.
|
Who wrote Howl's Moving Castle?
|
Howl's Moving Castle is a fantasy novel by British author Diana Wynne Jones.
|
1611.09441
| false
| null |
In this report we have presented a sentiment analysis tool for Twitter posts. We have discussed the characteristics of Twitter that make existing sentiment analyzers perform poorly. The model proposed in this report has addressed the challenges by using normalization methods and features specific to this media. We show that using external knowledge outside the tweet text (from landing pages of URLs) and user features can significantly improve performance. We have presented experimental results and comparison with state-of-the-art tools.
We show that using external knowledge outside the tweet text (from landing pages of URLs) and user features can significantly improve performance.
|
What external sources of information are used?
|
The answers are shown as follows:
* landing pages of URLs
|
null | false
| null |
What is the biggest dog breed in the world?
|
The Great Dane, which originated in Germany. It is one of the two largest dog breeds, together with the Irish Wolfhound.
|
|
null | false
| 56
|
Predicting stock market movements is a well-known problem of interest. Nowadays, social media closely reflects public sentiment and opinion about current events. In particular, Twitter has attracted a lot of attention from researchers studying public sentiment. Stock market prediction on the basis of public sentiment expressed on Twitter has been an intriguing field of research. Previous studies have concluded that the aggregate public mood collected from Twitter may well be correlated with the Dow Jones Industrial Average Index (DJIA). The thesis of this work is to observe how well the changes in stock prices of a company, the rises and falls, are correlated with the public opinions expressed in tweets about that company. Understanding an author's opinion from a piece of text is the objective of sentiment analysis. The present paper employs two different textual representations, Word2vec and N-gram, for analyzing public sentiment in tweets. In this paper, we apply sentiment analysis and supervised machine learning principles to tweets extracted from Twitter and analyze the correlation between the stock market movements of a company and the sentiment in tweets. Put simply, positive news and tweets in social media about a company would definitely encourage people to invest in the stocks of that company, and as a result the stock price of that company would increase. At the end of the paper, it is shown that a strong correlation exists between the rises and falls in stock prices and the public sentiment in tweets.
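As a toy illustration of the correlation analysis described above, the snippet below computes the Pearson correlation between a synthetic daily sentiment series and synthetic daily price changes. In the paper the sentiment scores come from a supervised classifier over Word2vec / N-gram tweet representations; all data here is made up.

```python
# Toy sketch: correlation between daily aggregate tweet sentiment and price movement.
import numpy as np
import pandas as pd

days = pd.date_range("2016-01-01", periods=60, freq="D")
rng = np.random.default_rng(1)
sentiment = rng.normal(size=60)                                   # mean daily tweet sentiment (synthetic)
price_change = 0.7 * sentiment + rng.normal(scale=0.5, size=60)   # synthetic daily price changes

df = pd.DataFrame({"sentiment": sentiment, "price_change": price_change}, index=days)
print(df["sentiment"].corr(df["price_change"]))                   # Pearson correlation
```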
Previous studies have concluded that the aggregate public mood collected from twitter may well be correlated with Dow Jones Industrial Average Index (DJIA). The thesis of this work is to observe how well the changes in stock prices of a company, the rises and falls, are correlated with the public opinions being expressed in tweets about that company.
|
What does the paper investigate?
|
Investigate the correlation of sentiments extracted from Twitter with the Dow Jones Industrial Average (DJIA).
|
null | false
| null |
What is orange wine?
|
Orange wine is a term given to white wines that have "skin contact" during the wine making process. Unlike true white wines, where contact between the pressed white grape juice and the white grape skins is minimised, for an orange wine the pressed juice will macerate with the skins, similar to the process for making a red wine. The depth of colour and the level of tannin present in the resulting wine will be determined, largely, by the length of this maceration. Due to the presence of tannin, orange wines can often taste "funky" compared to white wines, and are very popular with proponents of the natural wine movement.
|
|
null | false
| null |
Devon Dikeou (born 1963) is an American artist, publisher, and art collector. Dikeou’s practice investigates the "in-between" — subtle interactions between artist, art object, viewer, space, and context.
Life and education
Devon Dikeou is originally from Denver, Colorado. She received a B.A. from Brown University in 1986 and an M.F.A. from the School of Visual Arts in 1988.
Since the start of her career in the early 1990s, Dikeou has received several artist awards and nominations including The Sue Canon Award from the Museum of Contemporary Art Denver (2007), Assistance League of Houston (2011 and 2013), Santo Foundation (2012), and the DAM Key Award from Denver Art Museum (2017).
Exhibitions
Devon Dikeou, "What's Love Got To Do With It: From Sculpture," 1991-ongoing
Dikeou has exhibited at numerous international museums, foundations, and galleries including the New Museum, The Contemporary Austin, Boulder Museum of Contemporary Art, Artpace San Antonio, Tricia Collins Grand Salon, James Fuentes Gallery in 2017 and 2018, 179 Canal, Kai Matsumiya, and Postmasters. She has also shown at art fairs, including Art Basel Statements, The Armory Show, Independent, NADA Miami Beach, and NADA New York.
Dikeou's mid-career retrospective "Mid-Career Smear" opened at The Dikeou Collection in February 2020.
zingmagazine
zingmagazine issue 23
Devon Dikeou published the first issue of zingmagazine in 1995. zingmagazine is composed of rotating multi-page curatorial projects organized by artists and arts professionals.
The Dikeou Collection
zingmagazine
In 1998, Devon Dikeou founded the Dikeou Collection along with her brother Pany Dikeou. The Dikeou Collection is a private contemporary art collection in downtown Denver, Colorado, and features work from over 40 international artists and functions as extension of zingmagazine.
|
List all the awards received by Devon Dikeou in chronological order
|
The awards received by Devon Dikeou are as follows:
- The Sue Canon Award from the Museum of Contemporary Art Denver (2007)
- Assistance League of Houston (2011)
- Santo Foundation (2012)
- Assistance League of Houston (2013)
- DAM Key Award from Denver Art Museum (2017)
|
null | false
| null |
Classify the following as either Australian flora or fauna: kangaroo, wattle, emu, possum, echidna, kookaburra, macadamia, eucalyptus.
|
Kangaroo, emu, possum and echidna are Australian fauna. Wattle, macadamia and eucalyptus are Australian flora.
|
|
null | false
| 226
|
The goal of multi-document summarization (MDS) is to automatically generate a brief, well-organized summary for a topic which describes an event with a set of documents from different sources. BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . In the typical setting of MDS, the input is a set of news documents about the same topic. The output summary is a piece of short text document containing several sentences, generated only based on the input original documents.
With the development of social media and mobile equipments, more and more user generated content is available. Figure FIGREF2 is a snapshot of reader comments under the news report “The most important announcements from Google's big developers' conference”. The content of the original news report talks about some new products based on AI techniques. The news report generally conveys an enthusiastic tone. However, while some readers share similar enthusiasms, some others express their worries about new products and technologies and these comments can also reflect their interests which may not be very salient in the original news reports. Unfortunately, existing MDS approaches cannot handle this issue. We investigate this problem known as reader-aware multi-document summarization (RA-MDS). Under the RA-MDS setting, one should jointly consider news documents and reader comments when generating the summaries.
One challenge of the RA-MDS problem is how to conduct salience estimation by jointly considering the focus of news reports and the reader interests revealed by comments. Meanwhile, the model should be insensitive to the availability of diverse aspects of reader comments. Another challenge is that reader comments are very noisy, not fully grammatical and often expressed in informal expressions. Some previous works explore the effect of comments or social contexts in single document summarization such as blog summarization BIBREF7 , BIBREF8 . However, the problem setting of RA-MDS is more challenging because the considered comments are about an event which is described by multiple documents spanning a time period. Another challenge is that reader comments are very diverse and noisy. Recently, BIBREF9 employed a sparse coding based framework for RA-MDS jointly considering news documents and reader comments via an unsupervised data reconstruction strategy. However, they only used the bag-of-words method to represent texts, which cannot capture the complex relationship between documents and comments.
Recently, BIBREF6 proposed a sentence salience estimation framework known as VAESum based on a neural generative model called Variational Auto-Encoders (VAEs) BIBREF10 , BIBREF11 . During our investigation, we find that the Gaussian based VAEs have a strong ability to capture the salience information and filter the noise from texts. Intuitively, if we feed both the news sentences and the comment sentences into the VAEs, commonly existed latent aspect information from both of them will be enhanced and become salient. Inspired by this consideration, to address the sentence salience estimation problem for RA-MDS by jointly considering news documents and reader comments, we extend the VAESum framework by training the news sentence latent model and the comment sentence latent model simultaneously by sharing the neural parameters. After estimating the sentence salience, we employ a phrase based compressive unified optimization framework to generate a final summary.
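A minimal sketch of the shared-parameter idea described above is given below: a single Gaussian VAE whose encoder and decoder are applied to both news sentence vectors and comment sentence vectors, so latent aspects present in both are reinforced. The input featurization, the dimensions, and the downstream salience estimation and compressive summarization steps are assumptions and are omitted.

```python
# Sketch: one Gaussian VAE shared between news sentences and comment sentences.
import torch
import torch.nn as nn

class SharedSentenceVAE(nn.Module):
    def __init__(self, in_dim=5000, hidden=500, latent=100):
        super().__init__()
        self.enc = nn.Linear(in_dim, hidden)
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                 nn.Linear(hidden, in_dim))

    def forward(self, x):                       # x: news OR comment sentence vectors
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        return self.dec(z), mu, logvar

vae = SharedSentenceVAE()
# The same parameters are used for both sides, e.g.:
# news_recon, *_ = vae(news_sentences); comment_recon, *_ = vae(comment_sentences)
```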
There is a lack of high-quality dataset suitable for RA-MDS. Existing datasets from DUC and TAC are not appropriate. Therefore, we introduce a new dataset for RA-MDS. We employed some experts to conduct the tasks of data collection, aspect annotation, and summary writing as well as scrutinizing. To our best knowledge, this is the first dataset for RA-MDS.
Our contributions are as follows: (1) We investigate the RA-MDS problem and introduce a new dataset for the problem of RA-MDS. To our best knowledge, it is the first dataset for RA-MDS. (2) To tackle the RA-MDS, we extend a VAEs-based MDS framework by jointly considering news documents and reader comments. (3) Experimental results show that reader comments can improve the summarization performance, which also demonstrates the usefulness of the dataset.
During our investigation, we find that the Gaussian based VAEs have a strong ability to capture the salience information and filter the noise from texts.
|
Do the Gaussian based VAEs have a strong ability to capture the salience information and filter the noise from texts?
|
Yes, they do.
|
1706.08032
| false
| null |
Stanford - Twitter Sentiment Corpus (STS Corpus): STS Corpus contains 1,600K training tweets collected by a crawler from BIBREF0 . BIBREF0 constructed a test set manually with 177 negative and 182 positive tweets. The Stanford test set is small. However, it has been widely used in different evaluation tasks BIBREF0 BIBREF5 BIBREF13 .
Sanders - Twitter Sentiment Corpus: This dataset consists of hand-classified tweets collected by using search terms: INLINEFORM0 , #google, #microsoft and #twitter. We construct the dataset as BIBREF14 for binary classification.
Health Care Reform (HCR): This dataset was constructed by crawling tweets containing the hashtag #hcr BIBREF15 . Task is to predict positive/negative tweets BIBREF14 .
Stanford - Twitter Sentiment Corpus (STS Corpus): STS Corpus contains 1,600K training tweets collected by a crawler from BIBREF0 . BIBREF0 constructed a test set manually with 177 negative and 182 positive tweets. The Stanford test set is small. However, it has been widely used in different evaluation tasks BIBREF0 BIBREF5 BIBREF13 .
Sanders - Twitter Sentiment Corpus: This dataset consists of hand-classified tweets collected by using search terms: INLINEFORM0 , #google, #microsoft and #twitter. We construct the dataset as BIBREF14 for binary classification.
Health Care Reform (HCR): This dataset was constructed by crawling tweets containing the hashtag #hcr BIBREF15 . Task is to predict positive/negative tweets BIBREF14 .
|
Which three Twitter sentiment classification datasets are used for experiments?
|
The answers are shown as follows:
* Stanford - Twitter Sentiment Corpus (STS Corpus)
* Sanders - Twitter Sentiment Corpus
* Health Care Reform (HCR)
|
null | false
| null |
What are lots of different smells you might experience when going into a candle store?
|
Beachwood, Ocean breeze, coconut lime, Jasmine, Musk and Apple Cider
|
|
null | false
| 333
|
Unsupervised methods for learning distributed representations of words are ubiquitous in today's NLP research, but far less is known about the best ways to learn distributed phrase or sentence representations from unlabelled data. This paper is a systematic comparison of models that learn such representations. We find that the optimal approach depends critically on the intended application. Deeper, more complex models are preferable for representations to be used in supervised systems, but shallow log-linear models work best for building representation spaces that can be decoded with simple spatial distance metrics. We also propose two new unsupervised representation-learning objectives designed to optimise the trade-off between training time, domain portability and performance.
We also propose two new phrase or sentence representation learning objectives - Sequential Denoising Autoencoders (SDAEs) and FastSent, a sentence-level log-linear bag-of-words model.
|
What two new phrase or sentence representation learning objectives do the authors propose?
|
Sequential Denoising Autoencoders (SDAEs) and FastSent, a sentence-level log-linear bag-of-words model.
|