| paper_id (string, len 10, nullable) | yes_no (bool, 2 classes) | paper_index (int64, 0–519, nullable) | evidence (string, 0–37.7k chars) | question (string, 4–11.7k chars) | answer (string, 1–26k chars) |
|---|---|---|---|---|---|
null | false
| 159
|
The slots and values are separate parameters used on the encoder side. This embeds the source information into a vector representation $\textbf {z}_{i}$, which is a concatenation of the embedding vector representations of each slot-value pair, and is computed by:
$$\textbf {z}_{i} = \textbf {u}_{i} \oplus \textbf {v}_{i}$$ (Eq. 10)
where $\textbf {u}_{i}$ , $\textbf {v}_{i}$ are the $i$ -th slot and value embedding vectors, respectively, and $\oplus $ is vector concatenation. The index $i$ runs over the $L$ given slot-value pairs. In this work, we use a 1-layer Bidirectional LSTM (Bi-LSTM) to encode the sequence of slot-value pair embeddings. The Bi-LSTM consists of forward and backward LSTMs which read the sequence of slot-value pairs from left-to-right and right-to-left to produce the forward and backward sequences of hidden states ( $\overrightarrow{\textbf {e}_{1}}, .., \overrightarrow{\textbf {e}_{L}}$ ) and ( $\overleftarrow{\textbf {e}_{1}}, .., \overleftarrow{\textbf {e}_{L}}$ ), respectively. We then obtain the sequence of encoded hidden states $\textbf {E}=(\textbf {e}_{1}, \textbf {e}_{2}, .., \textbf {e}_{L})$ where $\textbf {e}_{i}$ is the sum of the forward hidden state $\overrightarrow{\textbf {e}_{i}}$ and the backward one $\overleftarrow{\textbf {e}_{i}}$ as follows:
$$\textbf {e}_{i}=\overrightarrow{\textbf {e}_{i}} + \overleftarrow{\textbf {e}_{i}}$$ (Eq. 12)
In this work, we use a 1-layer Bidirectional LSTM (Bi-LSTM) to encode the sequence of slot-value pair embeddings.
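As a rough illustration of this encoder (not the authors' code; the dimensions and the use of PyTorch are assumptions), the concatenation $\textbf{z}_i = \textbf{u}_i \oplus \textbf{v}_i$ followed by summing the Bi-LSTM's forward and backward states can be sketched as:

```python
import torch
import torch.nn as nn

d = 16  # per-slot / per-value embedding size (hypothetical)
L = 5   # number of slot-value pairs (hypothetical)

u = torch.randn(1, L, d)       # slot embeddings u_i
v = torch.randn(1, L, d)       # value embeddings v_i
z = torch.cat([u, v], dim=-1)  # z_i = u_i concat v_i -> shape (1, L, 2d)

# 1-layer Bi-LSTM over the sequence of slot-value pair embeddings
bilstm = nn.LSTM(input_size=2 * d, hidden_size=d, num_layers=1,
                 batch_first=True, bidirectional=True)
out, _ = bilstm(z)              # (1, L, 2d): forward and backward states concatenated
e = out[..., :d] + out[..., d:]  # e_i = forward hidden state + backward hidden state
```

Summing (rather than concatenating) the directional states keeps the encoded dimension equal to the hidden size, matching Eq. 12.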
|
How is the sequence of slot value pairs embedding encoded?
|
They use a 1-layer Bidirectional LSTM (Bi-LSTM) to encode the sequence of slot-value pair embeddings.
|
1909.13668
| false
| null |
Given the above interpretation, we now turn to a slightly different formulation of the ELBO based on $\beta $-VAE BIBREF15. This allows controlling the trade-off between the reconstruction and KL terms, as well as setting an explicit KL value. While $\beta $-VAE regularizes the ELBO via an additional coefficient $\beta \in {\rm I\!R}^+$, a simple extension BIBREF16 of its objective function incorporates an additional hyperparameter $C$ to explicitly control the magnitude of the KL term,
where $C\!\! \in \!\! {\rm I\!R}^+$ and $| . |$ denotes the absolute value. While we could apply constrained optimization to impose the explicit constraint $\text{KL}\!\!=\!\!C$, we found that the above objective function satisfies the constraint (see experiments). Alternatively, it has been shown BIBREF21 that a similar effect can be achieved by replacing the second term in eqn. DISPLAY_FORM6 with $\max \big (C,D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big )\big )$, at the risk of breaking the ELBO when $\text{KL}\!\!<\!\!C$ BIBREF22.
While $\beta $-VAE regularizes the ELBO via an additional coefficient $\beta \in {\rm I\!R}^+$, a simple extension BIBREF16 of its objective function incorporates an additional hyperparameter $C$ to explicitly control the magnitude of the KL term,
where $C\!\! \in \!\! {\rm I\!R}^+$ and $| . |$ denotes the absolute value.
|
How does explicit constraint on the KL divergence term that authors propose looks like?
|
Answer with content missing: (Formula 2) Formula 2 is an answer:
\big \langle\! \log p_\theta({x}|{z}) \big \rangle_{q_\phi({z}|{x})} - \beta |D_{KL}\big(q_\phi({z}|{x}) || p({z})\big)-C|
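A minimal numeric sketch of this objective in pure Python (the scalar `recon_loglik` and `kl` values, and the function name, are illustrative assumptions):

```python
def elbo_with_kl_target(recon_loglik, kl, beta=1.0, C=0.5):
    """Per-example objective <log p(x|z)> - beta * |KL - C|:
    deviation of the KL term from the target C is penalized in both directions."""
    return recon_loglik - beta * abs(kl - C)

# when KL hits the target C, only the reconstruction term remains
assert elbo_with_kl_target(-10.0, kl=0.5, beta=2.0, C=0.5) == -10.0
# deviating from C costs beta * |KL - C|
assert elbo_with_kl_target(-10.0, kl=1.5, beta=2.0, C=0.5) == -12.0
```

Unlike the `max(C, KL)` variant, the absolute-value penalty also pushes the KL up toward $C$ when it falls below the target.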
|
null | false
| null |
What should I do if water is dripping from the ceiling inside my house?
|
The first task is to figure out whether the leak is coming from inside your pipes or from external moisture like rain. If the leak only happens when it is raining, then shut off the main water supply and see if the leak still happens. If yes, then it is likely from rain. If it has been sunny and dry for a week and you are still getting the leak, shut the water off and see if the leak stops. If it does, then it is likely an internal leak.

If it is in fact an internal leak, call a reputable local plumber to have them test your pipes for pressure loss.

If it is an external leak, wait for sunny weather. After about a week of dry sunny weather, get a garden hose, pick a feature of your roof or external walls, and spray that feature (vent pipe, window, roof juncture, etc.) with a direct stream of high-volume, low-velocity water from the garden hose. Wait a few hours and see if the leak appears. After four hours, if no leak has appeared, move on to the next feature. If hosing down a specific feature does cause dripping within that four-hour window, then either call a roof repair company or go to your local hardware store and talk to an expert about ways to seal cracks or gaps in siding or roofing material.
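The two diagnostic tests described above form a small decision procedure; as a toy helper (the function name, flag names, and return strings are illustrative only):

```python
def likely_leak_source(leaks_in_rain_with_main_off, stops_when_main_off_in_dry_spell):
    """Toy decision helper mirroring the two tests described above."""
    if leaks_in_rain_with_main_off:
        # water main is off, yet rain still produces the drip
        return "external (rain intrusion): hose-test roof/wall features"
    if stops_when_main_off_in_dry_spell:
        # dry weather, leak stops only when the supply is shut off
        return "internal (supply pipes): call a plumber for a pressure test"
    return "inconclusive: repeat both tests"
```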
|
|
null | false
| null |
Silvio Berlusconi (/ˌbɛərlʊˈskoʊni/ BAIR-luu-SKOH-nee; Italian: [ˈsilvjo berluˈskoːni] (listen); born 29 September 1936) is an Italian media tycoon and politician who served as Prime Minister of Italy in four governments from 1994 to 1995, 2001 to 2006 and 2008 to 2011. He was a member of the Chamber of Deputies from 1994 to 2013, and has served as a member of the Senate of the Republic since 2022, and previously from March to November 2013, and as a Member of the European Parliament (MEP) since 2019, and previously from 1999 to 2001.
Berlusconi is the controlling shareholder of Mediaset and owned the Italian football club A.C. Milan from 1986 to 2017. He is nicknamed Il Cavaliere (The Knight) for his Order of Merit for Labour.
The Order of Merit for Labour (Italian: Ordine al Merito del Lavoro) is an Italian order of chivalry that was founded in 1923 by King Vittorio Emanuele III. It is awarded to those "who have been singularly meritorious" in agriculture, industry and commerce, crafts, lending and insurance. It is a continuation of the earlier Ordine al Merito Agrario, Industriale e Commerciale founded in 1901. Members of the order may use the title Cavaliere del lavoro.
He voluntarily resigned from this order in March 2014. In 2018, Forbes ranked him as the 190th richest man in the world with a net worth of US$8 billion. In 2009, Forbes ranked him 12th in the list of the World's Most Powerful People due to his domination of Italian politics throughout more than twenty years at the head of the centre-right coalition.
Berlusconi was Prime Minister for nine years in total, making him the longest serving post-war Prime Minister of Italy, and the third longest-serving since Italian unification, after Benito Mussolini and Giovanni Giolitti. He was the leader of the centre-right party Forza Italia from 1994 to 2009, and its successor party The People of Freedom from 2009 to 2013. Since November 2013, he has led a revived Forza Italia. Berlusconi was the senior G8 leader from 2009 until 2011, and he currently holds the record for hosting G8 summits (having hosted three summits in Italy). After serving nearly 19 years as a member of the Chamber of Deputies, the country's lower house, he became a member of the Senate following the 2013 Italian general election.
|
Tell me when Silvio Berlusconi was born, how long has he been Prime Minister in Italy and what his nickname is
|
Silvio Berlusconi was born on September 29, 1936 and has been in charge as Prime Minister in Italy for a total of 9 years (from 1994 to 1995, 2001 to 2006 and 2008 to 2011). His nickname is Il Cavaliere (The Knight) because he was awarded the Order of Merit for Labour, which is an Italian order of chivalry awarded to those "who have been singularly meritorious" in agriculture, industry and commerce, crafts, lending and insurance.
|
null | false
| null |
Why is customer obsession important
|
Customer obsession is important for acquiring new customers and maintaining customer accounts and their growth over time. It helps create trust with customers and build lasting relationships with them. When a customer says they only want to work with a single person they have a relationship with, that enables the person to grow accounts, find new use cases, and identify opportunities to drive platform consumption, increasing company revenue. As you work with customers, it is important that they trust your ability to deliver world-class results and work that is reproducible and scalable in their account and across new and other existing accounts.
|
|
null | false
| null |
Logging was Seattle's first major industry, but by the late 19th century, the city had become a commercial and shipbuilding center as a gateway to Alaska during the Klondike Gold Rush. Growth after World War II was partially due to the local Boeing company, which established Seattle as a center for aircraft manufacturing. The Seattle area developed into a technology center from the 1980s onwards with companies like Microsoft becoming established in the region; Microsoft founder Bill Gates is a Seattleite by birth. Internet retailer Amazon was founded in Seattle in 1994, and major airline Alaska Airlines is based in SeaTac, Washington, serving Seattle's international airport, Seattle–Tacoma International Airport. The stream of new software, biotechnology, and Internet companies led to an economic revival, which increased the city's population by almost 50,000 between 1990 and 2000.
|
From the passage provided, extract which companies are local to Seattle which led to its development. Separate them with a comma.
|
Boeing, Microsoft, Amazon, Alaska Airlines
|
2003.07758
| false
| null |
FLOAT SELECTED: Figure 5. The results are split for category and version of MDVC. The number of samples per category is given in parenthesis. The METEOR axis is cut up to the random performance level (7.16).
FLOAT SELECTED: Figure 5. The results are split for category and version of MDVC. The number of samples per category is given in parenthesis. The METEOR axis is cut up to the random performance level (7.16).
|
How many category tags are considered?
|
14 categories
|
null | false
| null |
I strongly believe in having a growth mindset and being a lifelong learner. However, I am a very busy person and I always find it hard to find time in the day to dedicate to educating myself on a variety of topics. I would love to read more, but I simply don't have the time to be able to do this today regularly. What are some ways that I can consume content to learn and exercise my mind, using the least amount of time as possible?
|
Time management is very hard for many people. Having a growth mindset is a fantastic way to continue to better yourself through all of life's troubles and tribulations. Here are a few ways that you can educate yourself either on the go, or with minimal time commitment:
1. TED (and TEDx) Talks - TED Talks are popular videos that present a great idea in 18 minutes or less. While TED talks are intended for a global audience, TEDx talks are intended for smaller audiences and are usually presented at a local level. TED and TEDx talk videos are readily available for consumption, for free, from the website www.ted.com, via a mobile app, as well as on Spotify, YouTube and many other content platforms.
2. Podcasts - Podcasts are a series of on-demand episodes, delivered orally, focused on a specific theme. Podcasts can be accessed from many types of devices and are readily available from many popular platforms, including Spotify, Apple Podcasts and iHeartRadio. Podcasts are focused on any number of topics, and can easily be listened to while on the go.
3. Newsletters - Newsletters are printed or electronic reports that are made available by many organizations to a subscribed audience. They are often written in short, digestible formats to provide the highlights of a variety of topics, and typically contain links to learn more, if desired. They usually take only a few minutes to read and are readily available from news sources like the Wall Street Journal, CNN, Fox News, as well as many websites and LinkedIn groups.
In addition to these ideas, simply dedicating a small amount of time each day to learning and improving yourself can go a long way. Try to dedicate 15 minutes every day, ideally at the same time each day, toward this and before long, it will become a habit.
|
|
null | false
| null |
“Social Distancing” refers to keeping a safe distance of
|
About 3 to 6 feet from others
|
|
null | false
| null |
Why do oceans have salt water but rivers have fresh water?
|
The water in rivers comes from rain, and rainwater does not contain any salt. A river picks up a small amount of salt from the ground as it travels downhill, and that salt enters the ocean at the river's mouth. The ocean becomes saltier over time because evaporation removes water but leaves the salt behind.
|
|
null | false
| null |
Evidence of human colonization in Sri Lanka appears at the site of Balangoda. Balangoda Man arrived on the island about 125,000 years ago and has been identified as Mesolithic hunter-gatherers who lived in caves. Several of these caves, including the well-known Batadombalena and the Fa Hien Cave, have yielded many artifacts from these people, who are currently the first known inhabitants of the island.
Balangoda Man probably created Horton Plains, in the central hills, by burning the trees in order to catch game. However, the discovery of oats and barley on the plains at about 15,000 BCE suggests that agriculture had already developed at this early date.
Several minute granite tools (about 4 centimetres in length), earthenware, remnants of charred timber, and clay burial pots date to the Mesolithic. Human remains dating to 6000 BCE have been discovered during recent excavations around a cave at Warana Raja Maha Vihara and in the Kalatuwawa area.
|
Given this paragraph about the prehistory of Sri Lanka, what are the evidence for prehistoric human colonization in Sri Lanka?
|
The oldest prehistoric evidence of human colonization in Sri Lanka goes back to about 125,000 years to a man named Balangoda Man. He was a Mesolithic hunter-gatherer who lived in caves. Furthermore, human remains dating back to 6000 BCE have also been found in several cites in the country.
|
2002.00317
| false
| null |
For source and cited documents, we use English-language computer science articles and annotation from the S2-GORC dataset BIBREF7. S2-GORC is a large citation graph dataset which includes full texts of 8.1 million scientific documents. We select a subset of 154K computer science articles as our corpus. From these, we extract 622K citing sentences that link back to other documents in our corpus. We hold 2500 examples for each of the validation and test sets. Detailed statistics can be found in Table TABREF4.
S2-GORC is a large citation graph dataset which includes full texts of 8.1 million scientific documents. We select a subset of 154K computer science articles as our corpus. From these, we extract 622K citing sentences that link back to other documents in our corpus. We hold 2500 examples for each of the validation and test sets.
|
What is the size of the corpus?
|
The answers are shown as follows:
* 8.1 million scientific documents
* 154K computer science articles
* 622K citing sentences
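Assuming the two held-out sets of 2,500 examples are drawn from the 622K citing sentences (the excerpt does not state this explicitly), the implied training-set size works out as:

```python
total_citing = 622_000      # citing sentences extracted from the corpus
val = test = 2_500          # held-out examples for each of validation and test
train = total_citing - val - test  # remaining sentences available for training
```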
|
null | false
| null |
What is the national day of Germany?
|
The National Day of Germany is called German Unity Day. It is celebrated on 3 October as a public holiday. This day commemorates the 1990 reunification of West and East Germany, which had been divided in the aftermath of the Second World War, into a single German state, the Federal Republic of Germany.
|
|
null | false
| null |
Identify which instrument is string or woodwind: Clavinet, Ocarina
|
Ocarina is woodwind, Clavinet is string.
|
|
null | false
| 503
|
The following tables show our main experimental result, averaged over 5 runs. We denote the number of examples per class per task at the top of each column. Overall, NCCL variants outperform the baseline methods, especially on the forgetting metric. Our goal is to demonstrate the usefulness of the adaptive learning rate scheme for reducing catastrophic forgetting, and to verify the proposed theoretical convergence analysis. We remark that our adaptive learning rates successfully suppress forgetting by a large margin compared to the baselines. Note that NCCL also outperforms A-GEM, which does not maximize transfer when $\langle \nabla f_{I_t}(x_t), \nabla g_{J_t}(x_t) \rangle > 0$. We can now empirically demonstrate that our theoretically guaranteed method of minimizing $\Gamma_t$ is valid.
We clipped $\beta_{H_t}$ to increase the performance. As we discussed earlier, we can prevent forgetting when $\langle \nabla f_{I_t}(x_t), \nabla g_{J_t}(x_t) \rangle > 0$. However, we observe that $\|\nabla f_{I_t}(x_t)\|^2$ suddenly increases because of the interference at the previous step $t-1$. A very large learning rate $\beta_{H_t}$, caused by the increased $\|\nabla f_{I_t}(x_t)\|$, can force the model into an arbitrary point that is likely to increase the loss of $f$. Clipping the learning rate reduces this problem and still has the effect of reducing the catastrophic forgetting term $\Gamma_t$. By the property of the quadratic polynomial, the catastrophic forgetting term is negative because the clipped value is smaller than the original learning rate.
We show that NCCL is a potentially powerful alternative for continual learning. Even with tiny replay memory, NCCL still performs better than some baselines. We note that NCCL shows the best performance on the forgetting metric. This implies that NCCL prevents catastrophic forgetting more efficiently than the others by minimizing the catastrophic forgetting term in the proposed optimization problem. However, its accuracy is slightly lower than other baselines, which include experience replays. The purpose of our adaptive learning rate scheme is to prevent catastrophic forgetting, so the performance on the current task is slightly lower than ER-Ring, stable-SGD, and ORTHOG-subspace. This result shows that the plasticity to learn a new task is restricted by NCCL variants with tiny memory. In particular, we would expect NCCL to benefit from the additional enhancements in ORTHOG-SUBSPACE and stable SGD by introducing their techniques. In the appendix, we add more results with larger memory sizes, which show that NCCL outperforms on average accuracy. We conclude that the transfer effect of the small memory size for NCCL is less effective.
We show that NCCL is a potentially powerful alternative for continual learning. Even with tiny replay memory, NCCL still performs better than some baselines. We note that NCCL shows the best performance on the forgetting metric. This implies that NCCL prevents catastrophic forgetting more efficiently than the others by minimizing the catastrophic forgetting term in the proposed optimization problem. However, its accuracy is slightly lower than other baselines, which include experience replays. The purpose of our adaptive learning rate scheme is to prevent catastrophic forgetting, so the performance on the current task is slightly lower than ER-Ring, stable-SGD, and ORTHOG-subspace. This result shows that the plasticity to learn a new task is restricted by NCCL variants with tiny memory. In particular, we would expect NCCL to benefit from the additional enhancements in ORTHOG-SUBSPACE and stable SGD by introducing their techniques. In the appendix, we add more results with larger memory sizes, which show that NCCL outperforms on average accuracy. We conclude that the transfer effect of the small memory size for NCCL is less effective.
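The clipping idea might be sketched as follows; the excerpt does not give NCCL's actual rule for $\beta_{H_t}$, so the adaptive rate below is purely illustrative, not the paper's update:

```python
import numpy as np

def clipped_adaptive_lr(grad_f, grad_g, base_lr=0.1, max_lr=0.5):
    """Toy adaptive learning rate with clipping (illustrative rule, not NCCL's)."""
    # positive transfer <grad_f, grad_g> > 0 is the regime where forgetting
    # can be suppressed; a gradient spike would otherwise blow up the rate
    transfer = float(np.dot(grad_f, grad_g))
    lr = base_lr * max(transfer, 0.0)
    return min(lr, max_lr)  # clipping keeps a spike in ||grad_f|| from derailing the update
```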
|
The authors argue that "the adaptive learning rate scheme is to prevent catastrophic forgetting", but the average accuracy maybe is a more important metric?
|
We consider that both average accuracy and the forgetting metric matter. By considering only average accuracy, we can underestimate forgetting while reaching high performance on the current task. Even if our method shows a slightly lower value on average accuracy, we can observe that our algorithm shows better performance on the forgetting metric.
|
1809.10644
| false
| null |
On the SR dataset, we outperform BIBREF8's text-based model by 3 F1 points, while just falling short of the Text + Metadata Interleaved Training model. While we appreciate the potential added value of metadata, we believe a tweet-only classifier has merits because retrieving features from the social graph is not always tractable in production settings. Excluding the embedding weights, our model requires 100k parameters, while BIBREF8 requires 250k parameters.
Excluding the embedding weights, our model requires 100k parameters, while BIBREF8 requires 250k parameters.
|
how much was the parameter difference between their model and previous methods?
|
The answers are shown as follows:
* our model requires 100k parameters, while BIBREF8 requires 250k parameters
|
null | false
| null |
Which characters belong to DC or Marvel Universe? Doctor Strange, Dr. Fate
|
Dr. Fate is DC, Doctor Strange is Marvel
|
|
null | false
| null |
It describes how large incumbent companies lose market share by listening to their customers and providing what appears to be the highest-value products, but new companies that serve low-value customers with poorly developed technology can improve that technology incrementally until it is good enough to quickly take market share from established businesses.
|
Summarize The Innovator's Dilemma in a few sentence
|
The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail, first published in 1997, is the best-known work of the Harvard professor and businessman Clayton Christensen. It expands on the concept of disruptive technologies, a term he coined in a 1995 article "Disruptive Technologies: Catching the Wave". It describes how large incumbent companies lose market share by listening to their customers and providing what appears to be the highest-value products, but new companies that serve low-value customers with poorly developed technology can improve that technology incrementally until it is good enough to quickly take market share from established business. Christensen recommends that large companies maintain small, nimble divisions that attempt to replicate this phenomenon internally to avoid being blindsided and overtaken by startup competitors.
|
null | false
| null |
The BCG vaccine has proved quite effective against which infection
|
Tuberculosis (TB) infection
|
|
null | false
| null |
I want to take a skiing vacation. Which of the following countries would be suitable for doing this: Switzerland, Cyprus, Kiribati, the United States, Haiti.
|
Switzerland and the United States are suitable as skiing destinations. Both countries have mountainous areas with world-renowned ski resorts. The United States boasts skiing locations in multiple states, including Colorado, California, and Utah. Switzerland also has some of the most popular ski locations in the world such as Davos. Cyprus, Kiribati, and Haiti are not suitable skiing destinations, as none of these countries has high mountains that produce enough snow to ski.
|
|
null | false
| 32
|
We have described the NLP4IF@EMNLP-IJCNLP 2019 shared task on fine-grained propaganda identification. We received 25 and 12 submissions on the test set for the sentence-level classification and the fragment-level classification tasks, respectively. Overall, the sentence-level task was easier and most submitted systems managed to outperform the baseline. The fragment-level task proved to be much more challenging, with lower absolute scores, but most teams still managed to outperform the baseline.
We plan to make the schema and the dataset publicly available to be used beyond NLP4IF. We hope that the corpus would raise interest outside of the community of researchers studying propaganda: the techniques related to fallacies and the ones relying on emotions might provide a novel setting for researchers interested in Argumentation and Sentiment Analysis.
As a kind of advertisement, Task 11 at SemEval 2020 is a follow-up of this shared task. It features two complementary tasks:
Given a free-text article, identify the propagandist text spans.
Given a text span already flagged as propagandist and its context, identify the specific propaganda technique it contains.
This setting would allow participants to focus their efforts on binary sequence labeling for Task 1 and on multi-class classification for Task 2.
The fragment-level task proved to be much more challenging, with lower absolute scores, but most teams still managed to outperform the baseline.
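The binary sequence labeling setting for Task 1 can be illustrated with a small span-to-BIO conversion; the span format and label names here are assumptions, not the shared task's official scheme:

```python
def spans_to_bio(tokens, spans):
    """Convert flagged propagandist spans (start, end) token indices into BIO labels."""
    labels = ["O"] * len(tokens)
    for start, end in spans:      # end is exclusive, as in Python slicing
        labels[start] = "B-PROP"  # first token of the span
        for i in range(start + 1, end):
            labels[i] = "I-PROP"  # continuation tokens
    return labels
```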
|
What's the result of a fragment-level task?
|
The fragment-level task proved to be much more challenging, with lower absolute scores, but most teams still managed to outperform the baseline.
|
null | false
| null |
What do Logan, Ororo, Charles, Jean, and Scott have in common?
|
They are all X-Men.
|
|
null | false
| null |
What is a bank run?
|
To understand a bank run you have to first understand how a bank works. Banks take deposits from customers and agree to return the deposits at any time. However, instead of charging a fee for storing the money, banks make a profit by investing the deposits, either by lending them out to businesses that need loans or by buying safe investments like treasuries.
The problem is that these investments typically are somewhat illiquid. So if a customer suddenly asks for their deposits back, the bank can’t immediately sell its investments to give it to them. To counter this, banks keep a pool of money on hand to handle day to day deposits and withdrawals.
However, if every customer wants to withdraw their deposits at once, the bank does not have enough money to give to every customer. This is a bank run.
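A toy numeric sketch of the reserve mechanics described above (the 10% reserve ratio and the withdrawal amounts are made up for illustration):

```python
deposits = 1_000.0
reserve_ratio = 0.10                     # hypothetical: 10% kept liquid, rest invested
reserves = deposits * reserve_ratio      # pool of money on hand for withdrawals

normal_day_withdrawals = 50.0            # the pool covers day-to-day demand
run_withdrawals = 600.0                  # panic: most customers withdraw at once

assert normal_day_withdrawals <= reserves  # business as usual
assert run_withdrawals > reserves          # bank cannot pay everyone: a bank run
```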
|
|
null | false
| null |
How do you make brisket on a pellet smoker?
|
First you need to prepare your pellet smoker by adding pellets and setting the temperature to 200 degrees Fahrenheit. It is common to use oak, mesquite, hickory and pecan pellets. Next you will need to trim and season your brisket. It is common to use mustard as a binder for your seasoning, but it is not required. Common seasonings for brisket are salt, pepper and garlic. It is recommended to season the brisket at least 30 minutes before cooking, but for best results allow the seasoned brisket to rest overnight in your refrigerator. Once the brisket is properly seasoned, place the brisket on your pellet grill. It is common in Texas to place the brisket fat side up. In other areas of the country, it is more common to place the brisket fat side down. Allow your brisket to smoke until it reaches an internal temperature of 175 degrees. Depending on the size of the brisket this could take in excess of 12 hours. Once the brisket reaches an internal temperature of 175 degrees, wrap the brisket with butcher paper to build better bark. Alternatively you can wrap the brisket in foil, which will finish the cooking faster but build less bark. Place the wrapped brisket back in your pellet smoker and increase the temperature to 275 degrees Fahrenheit. Once the brisket reaches an internal temperature of 205 degrees Fahrenheit, remove the brisket from your pellet smoker and allow it to rest in a cooler or oven until the brisket reaches an internal temperature of 140 degrees. Depending on the size of the brisket this could take between 2 and 3 hours. Slice your brisket and serve.
|
|
null | false
| null |
The archipelago includes the islands of Madeira, Porto Santo, and the Desertas, administered together with the separate archipelago of the Savage Islands.
|
Extract the islands that are part of the Madeira and Savage Islands archipelagos in Portugal. Separate them with a comma.
|
Madeira, Porto Santo, Desertas, Savage Islands
|
1710.00341
| true
| null |
FLOAT SELECTED: Figure 3: Example from the cQA forum dataset.
FLOAT SELECTED: Figure 3: Example from the cQA forum dataset.
|
Do they report results only on English data?
|
Yes.
|
null | false
| null |
Identify which animal is domesticated or wild: Western honey bee, European honey bee, Chinese honey bee, Japanese honey bee, Indian honey bee
|
Western honey bee is domesticated, European honey bee, Chinese honey bee, Japanese honey bee, and Indian honey bee are wild.
|
|
null | false
| null |
What is it called when a river of water falls off a cliff and creates a shower?
|
A waterfall
|
|
null | false
| null |
write a short essay on murals in jersey city
|
Jersey City is known for its vibrant street art scene, with murals adorning walls throughout the city. These murals not only add beauty and character to the city, but also serve as a form of public art that is accessible to everyone.
The murals in Jersey City come in all shapes and sizes, with a variety of themes and styles. Some feature colorful abstract designs, while others depict realistic images of people, animals, and landscapes. Many of the murals also incorporate social and political messages, addressing issues such as social justice, inequality, and the environment.
One of the most famous murals in Jersey City is the "Sixth Borough" mural, which depicts the city as a mythical sixth borough of New York City. This mural, which covers an entire building, has become an iconic symbol of Jersey City's identity and sense of community.
The murals in Jersey City are created by a diverse group of artists, both local and international. Some are commissioned by the city or private businesses, while others are created by artists who are drawn to the city's thriving street art scene. Many of the murals are created as part of organized events, such as the annual Jersey City Mural Festival, which brings together artists from around the world to create new works throughout the city.
The murals in Jersey City are not only beautiful and inspiring, but also serve as a reflection of the city's culture, history, and values. They are a testament to the power of art to bring people together, to inspire change, and to make the world a more beautiful and meaningful place.
|
|
null | false
| null |
Identify which instrument is string or percussion: Cultrun, Kora
|
Kora is string, Cultrun is percussion.
|
|
null | false
| 510
|
Where $M(\#)[C_{ne}]$ and $M_o(\#)[C_{ne}]$ refer to the output logits corresponding to the remaining classes produced by $M$ and $M_o$, respectively. $\#$ can be either $x_e^i$ or $x_{ne}^j$. $L_{ekd}$ and $L_{nekd}$ refer to the knowledge distillation-based regularization losses for the examples from the excluded and non-excluded classes, respectively. $L_{kd}$ refers to the overall knowledge distillation-based regularization loss for each mini-batch.
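A small pure-Python sketch of distilling only over the remaining (non-excluded) class logits; the temperature, logit values, and function names are assumptions for illustration, not the paper's implementation:

```python
import math

def softmax(xs, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    m = max(x / T for x in xs)                 # subtract max for numerical stability
    exps = [math.exp(x / T - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss_remaining(student_logits, teacher_logits, remaining_idx, T=2.0):
    """KD loss computed only on the logits of the remaining (non-excluded) classes."""
    s = softmax([student_logits[i] for i in remaining_idx], T)
    t = softmax([teacher_logits[i] for i in remaining_idx], T)
    # cross-entropy of student predictions against teacher soft targets
    return -sum(ti * math.log(si) for ti, si in zip(t, s))
```

Because only the indices in `remaining_idx` enter the loss, the logits of excluded classes have no effect on the gradient, which is the point the figure's curly brackets highlight.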
|
From Figure 3, KD is defined for remaining classes only, but why does the KD loss also include restricted classes?
|
We apologize for this confusion. However, Figure 3 is correct. We simply wanted to show that the KD loss is applied only to the logits corresponding to the remaining classes for training examples from both excluded and non-excluded classes. We used curly brackets to highlight that the KD loss preserves only the logits corresponding to the non-excluded classes. We have also mentioned this point in Sec. 4 of the paper. We have further clarified this point in the revised paper by modifying Figure 3 to avoid any confusion.
|
null | false
| null |
Which of these artists can be considered yacht rock? Hall and Oates, Journey, the Doobie Brothers, Prince, the Artist formerly known as Prince, Foghat, The Grateful Dead, the Little River Band, Christopher Cross, Van Halen, Fleetwood Mac, Kenny Loggins, Twisted Sister, Toto.
|
The following artists can be considered yacht rock: Hall and Oates, the Doobie Brothers, Foghat, The Little River Band, Christopher Cross, Kenny Loggins, Toto.
|
|
null | false
| null |
Lavender is a light shade of purple or violet. It applies particularly to the color of the flower of the same name. The web color called lavender is displayed at right—it matches the color of the very palest part of the lavender flower; however, the more saturated color shown below as floral lavender more closely matches the average color of the lavender flower as shown in the picture and is the tone of lavender historically and traditionally considered lavender by the average person as opposed to those who are website designers. The color lavender might be described as a medium purple or a light pinkish-purple. The term lavender may be used in general to apply to a wide range of pale, light or grayish-purples but only on the blue side. Lilac is pale purple on the pink side. In paints, the color lavender is made by mixing purple and white paint.
The first recorded use of the word lavender as a color term in English was in 1705.
|
What color is Lavendar?
|
Lavender is a light shade of purple or violet.
|
null | false
| null |
what is the date when the declaration of independence was written?
|
July 4th 1776
|
|
null | false
| null |
Ethiopia is one of the oldest countries in Africa, the emergence of Ethiopian civilization dates back thousands of years. Due to migration and imperial expansion, it grew to include many other primarily Afro-Asiatic-speaking communities, including Amhara, Oromos, Somalis, Tigray, Afars, Sidama, Gurage, Agaw and Harari, among others. One of the first kingdoms to rise to power in the territory was the kingdom of D'mt in the 10th century BC, which established its capital at Yeha. In the first century AD the Aksumite Kingdom rose to power in the Tigray Region with its capital at Aksum and grew into a major power on the Red Sea, subjugating Yemen and Meroe. In the early fourth century, during the reign of Ezana, Christianity was declared the state religion. Ezana's reign is also when the Aksumites first identified themselves as "Ethiopians", and not long after, Philostorgius became the first foreign author to call the Aksumites Ethiopians. The Aksumite empire fell into decline with the rise of Islam in the Arabian peninsula, which slowly shifted trade away from the Christian Aksum. It eventually became isolated, its economy slumped and Aksum's commercial domination of the region ended. The Aksumites gave way to the Zagwe dynasty, who established a new capital at Lalibela before giving way to the Solomonic dynasty in the 13th century. During the early Solomonic period, Ethiopia went through military reforms and imperial expansion that allowed it to dominate the Horn of Africa. Portuguese missionaries arrived at this time
|
Please give me a short bulleted list of key points about Ethiopia?
|
- Ethiopia is one of the older countries in Africa.
- In the first century AD the Aksumite Kingdom rose to power in the Tigray Region with its capital at Aksum
- During the early Solomonic period, Ethiopia went through military reforms and imperial expansion
|
1904.08386
| false
| null |
Given 55 city representations, how do we group them into eleven clusters of five cities each? Initially, we experimented with a graph-based community detection algorithm that maximizes cluster modularity BIBREF20, but we found no simple way to constrain this method to produce a specific number of equally-sized clusters. The brute force approach of enumerating all possible cluster assignments is intractable given the large search space ($55!/((5!)^{11}\,11!)$ possible assignments). We devise a simple clustering algorithm to approximate this process. First, we initialize with random cluster assignments and define “cluster strength” to be the relative difference between “intra-group” Euclidean distance and “inter-group” Euclidean distance. Then, we iteratively propose random exchanges of memberships, only accepting these proposals when the cluster strength increases, until convergence. To evaluate the quality of the computationally-derived clusters against those of Calvino, we measure cluster purity BIBREF21: given a set of predicted clusters $\Omega=\{\omega_1,\dots,\omega_K\}$ and ground-truth clusters $C=\{c_1,\dots,c_J\}$ that both partition a set of $N$ data points, $\mathrm{purity}(\Omega, C)=\frac{1}{N}\sum_{k}\max_{j}|\omega_k\cap c_j|$
Initially, we experimented with a graph-based community detection algorithm that maximizes cluster modularity BIBREF20, but we found no simple way to constrain this method to produce a specific number of equally-sized clusters. The brute force approach of enumerating all possible cluster assignments is intractable given the large search space ($55!/((5!)^{11}\,11!)$ possible assignments). We devise a simple clustering algorithm to approximate this process. First, we initialize with random cluster assignments and define “cluster strength” to be the relative difference between “intra-group” Euclidean distance and “inter-group” Euclidean distance. Then, we iteratively propose random exchanges of memberships, only accepting these proposals when the cluster strength increases, until convergence. To evaluate the quality of the computationally-derived clusters against those of Calvino, we measure cluster purity BIBREF21: given a set of predicted clusters $\Omega=\{\omega_1,\dots,\omega_K\}$ and ground-truth clusters $C=\{c_1,\dots,c_J\}$ that both partition a set of $N$ data points, $\mathrm{purity}(\Omega, C)=\frac{1}{N}\sum_{k}\max_{j}|\omega_k\cap c_j|$
|
Which clustering method do they use to cluster city description embeddings?
|
The answers are shown as follows:
* We devise a simple clustering algorithm to approximate this process. First, we initialize with random cluster assignments and define “cluster strength” to be the relative difference between “intra-group” Euclidean distance and “inter-group” Euclidean distance. Then, we iteratively propose random exchanges of memberships, only accepting these proposals when the cluster strength increases, until convergence.
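A minimal NumPy sketch of the swap-based procedure described above, together with a purity function. The exact definition of “cluster strength” used here (the relative gap between mean inter-group and mean intra-group distance) is one plausible reading of the paper, not the authors' exact formula, and all names are illustrative.

```python
import numpy as np

def cluster_strength(X, labels):
    # relative difference between mean inter-group and mean
    # intra-group pairwise Euclidean distance
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    iu = np.triu_indices(len(X), k=1)
    same = (labels[:, None] == labels[None, :])[iu]
    intra = D[iu][same].mean()
    inter = D[iu][~same].mean()
    return (inter - intra) / inter

def equal_size_clusters(X, k, iters=2000, seed=0):
    """Hill-climbing over random membership swaps, keeping all
    k clusters the same size throughout."""
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(k), len(X) // k)
    rng.shuffle(labels)
    best = cluster_strength(X, labels)
    for _ in range(iters):
        i, j = rng.choice(len(X), size=2, replace=False)
        if labels[i] == labels[j]:
            continue
        labels[i], labels[j] = labels[j], labels[i]  # propose a swap
        s = cluster_strength(X, labels)
        if s > best:
            best = s  # accept: strength increased
        else:
            labels[i], labels[j] = labels[j], labels[i]  # revert
    return labels

def purity(pred, truth):
    # fraction of points assigned to the majority ground-truth
    # class of their predicted cluster
    total = sum(np.bincount(truth[pred == c]).max() for c in np.unique(pred))
    return total / len(pred)
```

Since swaps only exchange the labels of two points, the equal-size constraint holds by construction throughout the search.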
|
null | false
| null |
What are hoagies, or sub sandwiches called in the state of Rhode Island?
|
In the state of Rhode Island, these types of sandwiches are called grinders.
|
|
null | false
| null |
Write a dplyr statement takes a dataframe, filters it by the tastiness string column for "super awesome" then creates a new column called pct_sugar that takes the total_sugar metric and divides it by mass
|
library(dplyr)
dataframe %>%
filter(tastiness == "super awesome") %>%
mutate(pct_sugar = total_sugar / mass)
|
|
null | false
| null |
What is the origin of the western numeral system?
|
The western numeral system that we use today, represented by the digits 0 through 9, is also called European digits, Arabic numerals, or Hindu-Arabic numerals. The reason that it is linked to the Middle East is that in the 10th century, Arabic speakers introduced the system into Europe. However, these numbers can be further traced back to their Indian origins.
|
|
null | false
| null |
who of the following are not soccer players from Argentina: Diego Armando Maradona, Lionel Messi, Alfredo Di Stefano, Mario Alberto Kempes, and Mauro Camoranesi?
|
Mauro Camoranesi. Although he was born in Argentina, he moved to Italy and played for Juventus. He held both Italian and Argentine citizenship, chose to represent Italy, and won the World Cup with the Italian national team in 2006.
|
|
1610.09516
| false
| null |
In our profile verification process, we observed that most gang member profiles portray a context representative of gang culture. Some examples of these profile pictures are shown in Figure FIGREF32 , where the user holds or points weapons, is seen in a group fashion which displays a gangster culture, or is showing off graffiti, hand signs, tattoos and bulk cash. Descriptions of these images may thus empower our classifier. Thus, we translated profile images into features with the Clarifai web service. Clarifai offers a free API to query a deep learning system that tags images with a set of scored keywords that reflect what is seen in the image. We tagged the profile image and cover image for each profile using 20 tags identified by Clarifai. Figure FIGREF36 offers the 20 most often used tags applied to gang and non-gang member profiles. Since we take all the tags returned for an image, we see common words such as people and adult coming up in the top 20 tag set. However, gang member profile images were assigned unique tags such as trigger, bullet, worship while non-gang images were uniquely tagged with beach, seashore, dawn, wildlife, sand, pet. The set of tags returned by Clarifai were thus considered as features for the classifier.
In our profile verification process, we observed that most gang member profiles portray a context representative of gang culture. Some examples of these profile pictures are shown in Figure FIGREF32 , where the user holds or points weapons, is seen in a group fashion which displays a gangster culture, or is showing off graffiti, hand signs, tattoos and bulk cash.
|
What are the differences in the use of images between gang member and the rest of the Twitter population?
|
The answers are shown as follows:
* user holds or points weapons, is seen in a group fashion which displays a gangster culture, or is showing off graffiti, hand signs, tattoos and bulk cash
|
null | false
| null |
Who is the current king of the Netherlands
|
The current king of the Netherlands is Willem-Alexander. He is the eldest son of Queen Beatrix and Prince Claus, and is married to Queen Máxima. He ascended the throne on April 30, 2013, following the abdication of his mother.
|
|
null | false
| null |
Chardonnay (UK: /ˈʃɑːrdəneɪ/, US: /ˌʃɑːrdənˈeɪ/, French: [ʃaʁdɔnɛ] (listen)) is a green-skinned grape variety used in the production of white wine. The variety originated in the Burgundy wine region of eastern France, but is now grown wherever wine is produced, from England to New Zealand. For new and developing wine regions, growing Chardonnay is seen as a 'rite of passage' and an easy entry into the international wine market.
The Chardonnay grape itself is neutral, with many of the flavors commonly associated with the wine being derived from such influences as terroir and oak. It is vinified in many different styles, from the lean, crisply mineral wines of Chablis, France, to New World wines with oak and tropical fruit flavors. In cool climates (such as Chablis and the Carneros AVA of California), Chardonnay wine tends to be medium to light body with noticeable acidity and flavors of green plum, apple, and pear. In warmer locations (such as the Adelaide Hills and Mornington Peninsula in Australia and Gisborne and Marlborough region of New Zealand), the flavors become more citrus, peach, and melon, while in very warm locations (such as the Central Coast AVA of California), more fig and tropical fruit notes such as banana and mango come out. Wines that have gone through malolactic fermentation tend to have softer acidity and fruit flavors with buttery mouthfeel and hazelnut notes.
|
What is Chardonnay?
|
Chardonnay (UK: /ˈʃɑːrdəneɪ/, US: /ˌʃɑːrdənˈeɪ/, French: [ʃaʁdɔnɛ] (listen)) is a green-skinned grape variety used in the production of white wine. The Chardonnay grape itself is neutral, with many of the flavors commonly associated with the wine being derived from such influences as terroir and oak. It is vinified in many different styles, from the lean, crisply mineral wines of Chablis, France, to New World wines with oak and tropical fruit flavors.
|
null | false
| null |
How do you know if you reached product market fit?
|
Product market fit is a term used in the startup ecosystem to describe having a product that satisfies a real need for a meaningful number of customers facing a given problem. There is no single metric for whether a startup has reached product market fit. One way to gauge it is whether the startup has developed a repeatable sales process and can do basic revenue forecasting. Product market fit is crucially important for the first product a startup releases, but it is also important for every subsequent product launch.
|
|
null | false
| null |
Hamilton's first season in Formula One saw him partner two-time and defending World Champion Fernando Alonso. Hamilton is the first and, as of 2022, the only black driver to race in the series. After finishing on the podium in his debut, Hamilton went on to set several records as he finished runner-up in the 2007 World Drivers' Championship to Kimi Räikkönen by one point, including those for the most consecutive podium finishes from debut (9), the joint most wins in a debut season (4) and the most points in a debut season (109). Throughout the season, Hamilton and Alonso were involved in a number of incidents which resulted in tensions between both drivers and the team, culminating in Alonso and McLaren terminating their contract by mutual consent in November. Following a successful first season at McLaren, Hamilton signed a multi-million-pound contract to stay with the team until 2012.
|
Is it true that Lewis Hamilton won the championship in his first F1 season?
|
Lewis Hamilton did not win the F1 world driver's championship in his rookie season in 2007. The title went to Kimi Räikkönen in the Ferrari team. Hamilton finished runner up, in second place, after a close battle with both Ferrari and his teammate Fernando Alonso.
|
null | false
| null |
Identify which instrument is string or woodwind: Bass guitar, Duduk
|
Bass guitar is string, Duduk is woodwind.
|
|
null | false
| null |
Original Joe's is a duo of restaurants in the North Beach neighborhood of San Francisco and the Westlake neighborhood of Daly City. They serve a wide variety of foods, mostly Italian-American cuisine with some mainstream American favorites. Their "signature" dishes include Joe's Special, Chicken Parmigiana, the Joe's Famous Hamburger Sandwich and a variety of steaks and chops.
|
What is Original Joe's restaurant and where are they located?
|
Original Joe's is a duo of restaurants in the North Beach neighborhood of San Francisco and the Westlake neighborhood of Daly City.
|
1912.07025
| false
| null |
FLOAT SELECTED: TABLE III: Scripts in the INDISCAPES dataset.
|
How many documents are in the Indiscapes dataset?
|
508
|
null | false
| null |
How many cents do I have if I have 3 dollars and a dime?
|
If you have 3 dollars and a dime, you have 310 cents in total.
|
|
null | false
| null |
Shio (塩, 'salt') ramen is the oldest of the four types. It has a pale, clear, yellowish broth made with plenty of salt and any combination of chicken, vegetables, fish, and seaweed. Occasionally pork bones are also used, but they are not boiled as long as they are for tonkotsu ramen, so the soup remains light and clear. Chāshū is sometimes swapped for lean chicken meatballs, and pickled plums and kamaboko (a slice of processed fish roll sometimes served as a frilly white circle with a pink or red spiral called narutomaki) are popular toppings as well. Noodle texture and thickness varies among shio ramen, but they are usually straight rather than curly. Hakodate ramen is a well-known version of shio ramen in Japan.
Shōyu (醤油, 'soy sauce') ramen has a clear brown broth, based on a chicken and vegetable (or sometimes fish or beef) stock with plenty of soy sauce added resulting in a soup that is tangy, salty, and savory yet still fairly light on the palate. Shōyu ramen usually has curly noodles rather than straight ones, although this is not always the case. It is often adorned with marinated bamboo shoots or menma, scallions, ninjin ('carrot'), kamaboko ('fish cakes'), nori ('seaweed'), boiled eggs, bean sprouts or black pepper; occasionally the soup will also contain chili oil or Chinese spices, and some shops serve sliced beef instead of the usual chāshū.
Miso (味噌) ramen reached national prominence around 1965. This uniquely Japanese ramen, which was developed in Sapporo Hokkaido, features a broth that combines copious miso and is blended with oily chicken or fish broth – and sometimes with tonkotsu or lard – to create a thick, nutty, slightly sweet and very hearty soup. Miso ramen broth tends to have a robust, tangy flavor, so it stands up to a variety of flavorful toppings: spicy bean paste or tōbanjan (豆瓣醤), butter and corn, leeks, onions, bean sprouts, ground pork, cabbage, sesame seeds, white pepper, chilli and chopped garlic are common. The noodles are typically thick, curly, and slightly chewy.
Karē (カレー, 'curry') ramen is a relative newcomer, cooked with curry soup. In Japan, several cities claim to be its place of origin. The city of Muroran claims it originated there in 1965 (see also Muroran curry ramen), while the city of Sanjō claims to have had karē ramen for over 80 years, and the city of Katori also claims to have been the site of its origin. Curry soup is mainly made with pork bones and vegetables and is seasoned with curry. The noodles are thick and curly. Toppings include chāshū, wakame, and bean sprouts.
|
Extract the names of the ramen. separate them with comma.
|
Shio, Shoyu, Miso, Kare
|
null | false
| null |
The years following 2009 marked a shift in the structure of the "Top Four" with Tottenham Hotspur and Manchester City both breaking into the top four places on a regular basis, turning the "Top Four" into the "Big Six". In the 2009–10 season, Tottenham finished fourth and became the first team to break the top four since Everton five years prior. Criticism of the gap between an elite group of "super clubs" and the majority of the Premier League has continued, nevertheless, due to their increasing ability to spend more than the other Premier League clubs. Manchester City won the title in the 2011–12 season, becoming the first club outside the "Big Four" to win since Blackburn Rovers in the 1994–95 season. That season also saw two of the "Big Four" (Chelsea and Liverpool) finish outside the top four places for the first time since that season.
With only four UEFA Champions League qualifying places available in the league, greater competition for qualification now exists, albeit from a narrow base of six clubs. In the five seasons following the 2011–12 campaign, Manchester United and Liverpool both found themselves outside of the top four three times, while Chelsea finished 10th in the 2015–16 season. Arsenal finished 5th in 2016–17, ending their record run of 20 consecutive top-four finishes.
In the 2015–16 season, the top four was breached by a non-Big Six side for the first time since Everton in 2005. Leicester City were the surprise winners of the league, qualifying for the Champions League as a result.
|
Who are the big six in EPL?
|
Arsenal, Liverpool, Manchester United, Chelsea, Manchester City and Tottenham are known as the big six in the English Premier League.
|
1709.00947
| false
| null |
On the right side of Figure FIGREF28 we show how the number of training (and validation) examples affects the loss. For a fixed INLINEFORM0 = 32768 we varied the amount of data used for training from 25% to 100%. Three trends are apparent. As we train with more data, we obtain better validation losses. This was expected. The second trend is that by using less than 50% of the data available the model tends to overfit the data, as indicated by the consistent increase in the validation loss after about 15 epochs (check dashed lines in right side of Figure FIGREF28 ). This suggests that for the future we should not try any drastic reduction of the training data to save training time. Finally, when not overfitting, the validation loss seems to stabilize after around 20 epochs. We observed no phase-transition effects (the model seems simple enough for not showing that type of behavior). This indicates we have a practical way of safely deciding when to stop training the model.
The second trend is that by using less than 50% of the data available the model tends to overfit the data, as indicated by the consistent increase in the validation loss after about 15 epochs (check dashed lines in right side of Figure FIGREF28 ).
|
What experimental results suggest that using less than 50% of the available training examples might result in overfitting?
|
The answers are shown as follows:
* consistent increase in the validation loss after about 15 epochs
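The observation that validation loss stabilizes after roughly 20 epochs suggests simple patience-based early stopping as the practical stopping rule. A generic sketch of that rule (the class and its thresholds are illustrative, not the authors' code):

```python
class EarlyStopping:
    """Stop training when validation loss has not improved for
    `patience` consecutive epochs."""

    def __init__(self, patience=5, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True to stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

Calling `step` once per epoch with the current validation loss returns `True` once the loss has failed to improve for `patience` epochs in a row.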
|
null | false
| null |
George Washington was born on February 22, 1732, at Popes Creek in Westmoreland County, in the British colony of Virginia, and was the first of six children of Augustine and Mary Ball Washington.
|
When was George Washington born?
|
George Washington was born on February 22, 1732.
|
null | false
| null |
Why is Arizona the best state to live in?
|
Arizona is the best state to live in for numerous reasons. First, the tax situation is highly advantageous with a 2.55 percent flat state tax on all income. Second, the weather is incredible for 7-8 months per year, especially if you like the heat. The cost of living is relatively average and there are many growing, young towns to choose from like Gilbert and Chandler. Arizona has all 4 major sports teams and continually hosts major events, like the Super Bowl, which was in Arizona this past year. Last but not least, Arizona has great hiking and biking trails, easy access to the airport, and Phoenix is located close to the Grand Canyon and Zion.
|
|
null | false
| 92
|
To paraphrase the great bard BIBREF21, there is something rotten in the state of the art. We propose Human-And-Model-in-the-Loop Enabled Training (HAMLET), a training procedure to automatically mitigate problems with current dataset collection procedures (see Figure FIGREF1).
In our setup, our starting point is a base model, trained on NLI data. Rather than employing automated adversarial methods, here the model's “adversary” is a human annotator. Given a context (also often called a “premise” in NLI), and a desired target label, we ask the human writer to provide a hypothesis that fools the model into misclassifying the label. One can think of the writer as a “white hat” hacker, trying to identify vulnerabilities in the system. For each human-generated example that is misclassified, we also ask the writer to provide a reason why they believe it was misclassified.
For examples that the model misclassified, it is necessary to verify that they are actually correct —i.e., that the given context-hypothesis pairs genuinely have their specified target label. The best way to do this is to have them checked by another human. Hence, we provide the example to human verifiers. If two human verifiers agree with the writer, the example is considered a good example. If they disagree, we ask a third human verifier to break the tie. If there is still disagreement between the writer and the verifiers, the example is discarded. Occasionally, verifiers will overrule the original label of the writer.
Once data collection for the current round is finished, we construct a new training set from the collected data, with accompanying development and test sets. While the training set includes correctly classified examples, the development and test sets are built solely from verified misclassified examples. The test set was further restricted so as to: 1) include pairs from “exclusive” annotators that are never included in the training data; and 2) be balanced by label classes (and genres, where applicable). We subsequently train a new model on this and other existing data, and repeat the procedure three times.
We propose Human-And-Model-in-the-Loop Enabled Training (HAMLET), a training procedure to automatically mitigate problems with current dataset collection procedures (see Figure 1).
|
What is Human-And-Model-in-the-Loop Enabled Training (HAMLET) proposed by the authors?
|
It is a training procedure to automatically mitigate problems with current dataset collection procedures.
|
null | false
| null |
Write an email to a friend you haven't seen in a long time telling them about your move to California
|
Dear Florence -
I hope you are doing well. It was such a joy to get your letter in the mail. I normally just get a lot of junk mail and bills. I am happy to hear you've settled into your new home and things in general are going well for you.
I wanted to write you back to tell you about my move to California! The move itself was fairly uneventful. The hardest part was dealing with Felix who refused to go into his cat carrier and was generally upset over the whole ordeal.
California so far is great. Everything is expensive (Yikes!) but the food is amazing, the nature is stunning, and there is so much to see and do. I feel like I will never run out of fun weekend trips or restaurants to try. You will have to visit me soon! I have a spare bedroom and you could stay with me for a long weekend or more if you want. It would be so much fun to show you all the things I've found so far and to get to spend some quality time together.
When you get a chance, please do drop me a line or give me a call. I love hearing from you and miss our long talks.
Your friend,
Shirley
|
|
null | false
| null |
Schools and universities typically have a summer break to take advantage of the warmer weather and longer days. In almost all countries, children are out of school during this time of year for summer break, although dates vary. Many families will take holidays for a week or two over summer, particularly in Southern Hemisphere Western countries with statutory Christmas and New Year holidays.
In the United States, public schools usually end in late May in Memorial Day weekend, while colleges finish in early May. Public school traditionally resumes near Labor Day, while higher institutions often resume in mid-August.
In England and Wales, school ends in mid-July and resumes again in early September. In Scotland, the summer holiday begins in late June and ends in mid-to late-August. Similarly, in Canada the summer holiday starts on the last or second-last Friday in June and ends in late August or on the first Tuesday of September, with the exception of when that date falls before Labour Day, in which case, ends on the second Tuesday of the month. In Russia the summer holiday begins at the end of May and ends on 31 August.
In the Southern Hemisphere, school summer holiday dates include the major holidays of Christmas and New Year's Day. School summer holidays in Australia, New Zealand and South Africa begin in early December and end in early February, with dates varying between states. In South Africa, the new school year usually starts during the second week of January, thus aligning the academic year with the Calendar year. In India, school ends in late April and resumes in early or mid-June. In Cameroon and Nigeria, schools usually finish for summer vacation in mid-July and resume in the later weeks of September or the first week of October.
|
Based on the following passage, create a list os when schools close and resume in England and Wales, Scotland and Canada.
|
1. In England and Wales, school ends in mid-July and resumes again in early September.
2. In Scotland, the summer holiday begins in late June and ends in mid-to late-August.
3. In Canada the summer holiday starts on the last or second-last Friday in June and ends in late August or on the first Tuesday of September, with the exception of when that date falls before Labour Day, in which case, ends on the second Tuesday of the month.
|
null | false
| null |
Write a few paragraphs on the Smoky Mountains in the United States, and why it is a nice vacation spot.
|
The Smoky Mountains are located in Tennessee and North Carolina, and constitute a portion of the Appalachian Mountains along the mid-east of the United States. The Smoky Mountains boast a wide variety of rich and diverse plant and animal life, which draws tourists from around the world. Outdoor activities in the Smoky Mountains include hiking, zip-lining, biking, boating, kayaking, fly-fishing, skiing, canyoning, to name a few.
The Smoky Mountains area also has popular destinations like Asheville, NC - a center of music, art, food, and culture. Asheville is home to the Biltmore Estate, America's largest privately-owned home, constructed by the Vanderbilt family in 1889.
Overall, the Smoky Mountain area offers a wide range of fun activities that make for a great vacation for all ages.
|
|
null | false
| null |
What are some popular open source products used for data analytics?
|
Apache Spark, MLFlow, and Delta Lake are popular open source products used to build Data Lakehouses.
|
|
1811.00942
| false
| null |
For each model, we examined word-level perplexity, R@3 in next-word prediction, latency (ms/q), and energy usage (mJ/q). To explore the perplexity–recall relationship, we collected individual perplexity and recall statistics for each sentence in the test set.
|
What aspects have been compared between various language models?
|
Quality measures using perplexity and recall, and performance measured using latency and energy usage.
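As an illustrative sketch of how the per-sentence quality metrics could be computed from model outputs (the function names and the assumption of natural-log token probabilities are mine, not the authors'):

```python
import math

def sentence_perplexity(token_logprobs):
    """Word-level perplexity of one sentence, given the model's
    natural-log probability of each token."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def recall_at_3(ranked_predictions, target_word):
    """1 if the target word is among the model's top-3 next-word
    predictions, else 0."""
    return int(target_word in ranked_predictions[:3])
```

Collecting these two statistics per test sentence, alongside measured latency and energy per query, yields exactly the four quantities compared in the study.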
|
null | false
| null |
According to the CBO (and others), the precise reasons for the rapid growth in income at the top are not well understood, but involve multiple, possibly conflicting, factors.
Causes include:
decline of labor unions – Unions, weakened in part by globalization and automation, may account for one-third to more than one-half of the rise of inequality among men. Pressure on employers to increase wages and on lawmakers to enact worker-friendly measures declined. Rewards from productivity gains went to executives, investors and creditors. A study by Kristal and Cohen reported that rising wage inequality was driven more by declining unions and the fall in the real value of the minimum wage, with twice as much impact as technology. An alternative theory states that passthrough income's contribution is incorrectly attributed to capital rather than labor.
globalization – Low skilled American workers lost ground in the face of competition from low-wage workers in Asia and other "emerging" economies.
skill-biased technological change – Rapid progress in information technology increased the demand for skilled and educated workers.
superstars – Modern communication technologies often turn competition into a "winner take most" tournament in which the winner is richly rewarded, while the runners-up get far less.
financialization – In the 1990s stock market capitalization rose from 55% to 155% of Gross Domestic Product (GDP). Corporations began to shift executive compensation toward stock options, increasing incentives for managers to make decisions to increase share prices. Average annual CEO options increased from $500,000 to over $3 million. Stock comprised almost 50% of CEO compensation. Managers were incentivized to increase shareholder wealth rather than to improve long-term contracts with workers; between 2000 and 2007, nearly 75% of increased stock growth came at the cost of labor wages and salaries.
immigration of less-educated workers – Relatively high levels of immigration of low skilled workers since 1965 may have reduced wages for American-born high school dropouts;
college premium - Workers with college degrees traditionally earned more and faced a lower unemployment rate than others. Wealthy families are also more likely to send their children to schools which have large endowments, resulting in more grants and lower student debt. The cycle is completed when wealthier alums donate more and disproportionately increase the size of elite endowments. Elite colleges also have better access to financial expertise.
automation - The Bureau of Labor Statistics (BLS) found that increased automation had led to "an overall drop in the need for labor input. This would cause capital share to increase, relative to labor share, as machines replace some workers."
We haven't achieved the minimalist state that libertarians advocate. What we've achieved is a state too constrained to provide the public goods – investments in infrastructure, technology, and education – that would make for a vibrant economy and too weak to engage in the redistribution that is needed to create a fair society. But we have a state that is still large enough and distorted enough that it can provide a bounty of gifts to the wealthy.
—Joseph Stiglitz
policy – Krugman asserted that movement conservatives increased their influence over the Republican Party beginning in the 1970s. In the same era, it increased its political power. The result was less progressive tax laws, anti-labor policies, and slower expansion of the welfare state relative to other developed nations (e.g., the unique absence of universal healthcare). Further, variation in income inequality across developed countries indicate that policy has a significant influence on inequality; Japan, Sweden and France have income inequality around 1960 levels. The US was an early adopter of neoliberalism, whose focus on growth over equality spread to other countries over time.
corporatism and corpocracy – Excessive attention to the interests of corporations reduced scrutiny over compensation shifts.
female labor force participation – High earning households are more likely to be dual earner households.
stock ownership is tilted towards households at higher income and education levels, resulting in disparate investment income.
Higher income households are disproportionately likely to prosper when economic times are good, and to suffer losses during downturns. More of their income comes from relatively volatile capital income. For example, in 2011 the top 1% of income earners derived 37% of their income from labor, versus 62% for the middle quintile. The top 1% derived 58% of their income from capital as opposed to 4% for the middle quintile. Government transfers represented only 1% of the income of the top 1% but 25% for the middle quintile; the dollar amounts of these transfers tend to rise in recessions.
According to a 2018 report by the Organization of Economic Cooperation and Development (OECD), the US has higher income inequality and a larger percentage of low income workers than almost any other advanced nation because unemployed and at-risk workers get less support from the government and a weak collective bargaining system.
|
Extract the causes of income inequality in the United States mentioned in the text. Separate them with a comma.
|
The text lists the following as causes of income inequality in the United States: decline of labor unions, globalization, skill-biased technological change, superstars, financialization, immigration of less-educated workers, college premium, automation, public policy, corporatism and corpocracy, female labor force participation, and disproportionate stock ownership in households at higher income and education levels.
|
null | false
| 50
|
We presented unsupervised approaches for semantic representations of mathematical equations using their surrounding words. Across 4 different collections we showed that our methods offer more effective modeling compared to existing embedding models. We also demonstrate that they can capture the semantic similarity between equations and the words in the collection. In the future we plan to explore how EqEmb could be extended to represent other objects such as images, captions and inline figures.
In the future we plan to explore how EqEmb could be extended to represent other objects such as images, captions and inline figures.
|
What plan do they have for the future?
|
They plan to explore how EqEmb could be extended to represent other objects such as images, captions and inline figures.
|
1909.06522
| false
| null |
The character sets of these 7 languages have little overlap except that (i) they all include the common basic Latin alphabet, and (ii) both Hindi and Marathi use Devanagari script. We took the union of the 7 character sets therein as the multilingual grapheme set (Section SECREF2), which contained 432 characters. In addition, we deliberately split the 7 languages into two groups, such that the languages within each group were more closely related in terms of language family, orthography or phonology. We thus built 3 multilingual ASR models trained on:
The character sets of these 7 languages have little overlap except that (i) they all include the common basic Latin alphabet, and (ii) both Hindi and Marathi use Devanagari script. We took the union of the 7 character sets therein as the multilingual grapheme set (Section SECREF2), which contained 432 characters.
|
How much of the ASR grapheme set is shared between languages?
|
Little overlap, except for the common basic Latin alphabet and the fact that Hindi and Marathi use the same script.
|
null | false
| null |
What's loquat fruit and where does it originate from?
|
The loquat (Eriobotrya japonica) is a large evergreen shrub or tree, grown commercially for its orange fruit and for its leaves, which are used to make herbal tea. It is also cultivated as an ornamental plant.
The loquat is in the family Rosaceae, and is native to the cooler hill regions of south-central China. In Japan the loquat is known as biwa (枇杷, びわ), and has been grown for over 1,000 years. The loquat has been introduced to regions with subtropical to mild temperate climates throughout the world.
Eriobotrya japonica was formerly thought to be closely related to the genus Mespilus, and is still sometimes mistakenly known as the Japanese medlar. It is also known as Japanese plum and Chinese plum, as well as pipa in China, naspli in Malta, lukaat in India, lucat or loket in Sri Lanka, níspero in Spain, nêspera in Portugal, shések in Israel, akidéné in Lebanon, ebirangweti in Kisii, nespolo in Italy (where the name is shared with Mespilus germanica), and golabi jangali (jungle pear) in Iran.
|
|
null | false
| null |
Liebers is a German language surname. Notable people with the name include:
Mario Liebers (born 1960), German former competitive figure skater
Martin Liebers (born 1985), German former competitive figure skater
Matthias Liebers (born 1958), former German footballer
Peter Liebers (born 1988), German former figure skater.
|
Which notable person having surname Liebers in this list was born first?
|
Mario Liebers
|
null | false
| null |
What is the difference between longboard and shortboard surfing?
|
Surfing is a water sport in which the "surfer" stands on top of a board and glides on top of a wave, typically riding with the wave from ocean to shore. The sport is typically divided into longboarding or shortboarding, although mid-length boards do exist. Longboarding is the original form of surfing, and typically refers to riding a board that is at least 8 feet in length.
Due to their size and volume, longboards are more stable and offer increased paddle strength. When catching a wave, longboarders can be said to "cruise" and glide as their main motions after catching a wave. Shortboarding typically refers to riding a board that is less than 5 feet in length; with less volume and length, shortboarders can maneuver more quickly, performing cutbacks and carves when they catch a wave.
Beginners will typically start with longboarding, although not all will go on to shortboarding, as shortboard surfing is not necessarily considered "progressive" to longboarding, and surfers will pick which style of surfing they feel is more suited to them.
|
|
null | false
| 377
|
The Recognizing Textual Entailment (RTE) challenges first appeared in 2004 as a means to test textual entailment, i.e. relations between a premise text and a hypothesis text:
An entailment example from RTE1.
Budapest again became the focus of national political drama in the late 1980s, when Hungary led the reform movement in eastern Europe that broke the communist monopoly on political power and ushered in the possibility of multiparty politics.
In the late 1980s Budapest became the center of the reform movement.
Entailment [RTE702]
In contrast to the FraCaS test suite, the RTE challenges use naturally occurring data as premises. The hypothesis text is then constructed based on this premise text. There is either a binary or a tripartite classification of entailment — depending on the version of RTE. The first two RTE challenges follow the former scheme and make a binary classification of entailment (entailed or not entailed). Tripartite classification (entailment, entailment of the negation of the hypothesis, or no entailment) is added in the later datasets, retaining two-way classification versions as well. Seven RTE challenges have been created altogether.
The main advantages of the RTE challenges are their use of examples from natural text and the inclusion of cases that require presupposed information, mostly world knowledge. Indeed, the very definition of inference assumed in a number of the examples is problematic. As BIBREF1 have pointed out, RTE platforms suffer from cases of inference that should not be categorized as such. For these cases, a vast amount of world knowledge needs to be taken into consideration (which, most importantly, not every linguistic agent has). In this paper, taking the RTE challenges as our starting point, we claim that RTE is insufficiently precise to perform logical reasoning or precise reasoning tasks, and we take up the task of validating our working hypothesis and proposing a method for doing proper collection of precise entailment pairs in the style of RTE. Of course, the creators of RTE had in mind a looser definition of inference where both a precise and an imprecise definition of entailment would be at play. BIBREF2 mention that “our applied notion of textual entailment is also related, of course, to classical semantic entailment in the linguistics literature... a common definition of entailment specifies that a text t entails another text h (hypothesis, in our terminology) if h is true in every circumstance (possible world) in which t is true." This is close to what we want to capture in this paper. But, at the same time, BIBREF2 also mention that “however, our applied definition allows for cases in which the truth of the hypothesis is highly plausible, for most practical purposes, rather than certain". It is these cases we want to make more precise, in the sense of making explicit the supporting hidden inferences that are at play in many of the RTE examples.
In a way, what we are aiming at is a methodology for constructing entailment datasets in the style of RTE, one that will involve a more precise definition of entailment and will further record any missing/hidden premises that are used in justifying or rejecting an entailment pattern.
The Recognizing Textual Entailment (RTE) challenges first appeared in 2004 as a means to test textual entailment, i.e. relations between a premise text and a hypothesis text: An entailment example from RTE1.
|
When the Recognizing Textual Entailment (RTE) challenges first appeared?
|
In 2004.
|
null | false
| null |
What's the best piece of advice you ever received?
|
You cannot learn everything about the world, but that does not mean you should stop trying.
|
|
null | false
| null |
What are the applications of game theory in NBA basketball?
|
Game theory studies the strategic interaction of rational agents. The National Basketball Association (NBA) is the world's premier basketball league, which includes 30 teams across the United States and Canada. An example of game theory in the NBA is the "modern NBA", in which teams play stretch 4s (Power Forwards) and 5s (Centers) who can dribble, pass and shoot rather than simply rebound and dunk. This becomes the "strictly dominant" strategy, as it allows the "modern" NBA team to shoot a higher volume of 3-pointers, which have a higher expected value per shot (40 Field Goal % times 3 = 1.2) than 2-pointers (50 Field Goal % times 2 = 1). The Golden State Warriors' championship run in the 2010s with Kevin Durant and Draymond Green is a great example of this coaching and roster construction strategy. Now nearly all NBA teams employ this strategy, so it is less effective (a Nash (not Steve Nash!) Equilibrium).
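The expected-value arithmetic above can be checked in a few lines (the field-goal percentages are the text's illustrative round numbers, not actual league statistics):

```python
# Expected points per shot attempt: make probability times shot value.
# The percentages below are the illustrative round numbers from the text.
def expected_points(fg_pct: float, shot_value: int) -> float:
    return fg_pct * shot_value

three_ev = expected_points(0.40, 3)  # 1.2 expected points per attempt
two_ev = expected_points(0.50, 2)    # 1.0 expected points per attempt
```

Since 1.2 > 1.0, a team shooting threes at 40% outscores one shooting twos at 50% on a per-attempt basis, which is the sense in which the strategy dominates.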
|
|
null | false
| null |
The Industrial Revolution was the transition to new manufacturing processes in Great Britain, continental Europe, and the United States, that occurred during the period from around 1760 to about 1820–1840. This transition included going from hand production methods to machines; new chemical manufacturing and iron production processes; the increasing use of water power and steam power; the development of machine tools; and the rise of the mechanized factory system. Output greatly increased, and a result was an unprecedented rise in population and in the rate of population growth. The textile industry was the first to use modern production methods, and textiles became the dominant industry in terms of employment, value of output, and capital invested.
The Industrial Revolution began in Great Britain, and many of the technological and architectural innovations were of British origin. By the mid-18th century, Britain was the world's leading commercial nation, controlling a global trading empire with colonies in North America and the Caribbean. Britain had major military and political hegemony on the Indian subcontinent; particularly with the proto-industrialised Mughal Bengal, through the activities of the East India Company. The development of trade and the rise of business were among the major causes of the Industrial Revolution.
The Industrial Revolution marked a major turning point in history. Comparable only to humanity's adoption of agriculture with respect to material advancement, the Industrial Revolution influenced in some way almost every aspect of daily life. In particular, average income and population began to exhibit unprecedented sustained growth. Some economists have said the most important effect of the Industrial Revolution was that the standard of living for the general population in the Western world began to increase consistently for the first time in history, although others have said that it did not begin to meaningfully improve until the late 19th and 20th centuries. GDP per capita was broadly stable before the Industrial Revolution and the emergence of the modern capitalist economy, while the Industrial Revolution began an era of per-capita economic growth in capitalist economies. Economic historians are in agreement that the onset of the Industrial Revolution is the most important event in human history since the domestication of animals and plants.
The precise start and end of the Industrial Revolution is still debated among historians, as is the pace of economic and social changes. Eric Hobsbawm held that the Industrial Revolution began in Britain in the 1780s and was not fully felt until the 1830s or 1840s, while T. S. Ashton held that it occurred roughly between 1760 and 1830. Rapid industrialisation first began in Britain, starting with mechanized textile spinning in the 1780s, with high rates of growth in steam power and iron production occurring after 1800. Mechanized textile production spread from Great Britain to continental Europe and the United States in the early 19th century, with important centres of textiles, iron and coal emerging in Belgium and the United States, and later textiles in France.
An economic recession occurred from the late 1830s to the early 1840s when the adoption of the Industrial Revolution's early innovations, such as mechanized spinning and weaving, slowed and their markets matured. Innovations developed late in the period, such as the increasing adoption of locomotives, steamboats and steamships, and hot blast iron smelting. New technologies such as the electrical telegraph, widely introduced in the 1840s and 1850s, were not powerful enough to drive high rates of growth. Rapid economic growth began to occur after 1870, springing from a new group of innovations in what has been called the Second Industrial Revolution. These innovations included new steel making processes, mass production, assembly lines, electrical grid systems, the large-scale manufacture of machine tools, and the use of increasingly advanced machinery in steam-powered factories.
|
Give me a short summary of the Industrial Revolution
|
The Industrial Revolution began in Great Britain and spread to continental Europe and the United States. It moved production from labor-intensive hand methods to mechanization. It covered the period from about 1760 to 1820–1840. It was a huge advancement, matched only by humanity's adoption of agriculture and domestication of animals.
|
null | false
| null |
What is the purpose of operating a motor vehicle?
|
The purpose of operating a motor vehicle is to get from point A to point B without incident, regardless of whose fault it might be.
|
|
null | false
| null |
Who is the best golfer in the world?
|
The number one golfer in the official world golf rankings changes often and is determined by the OWGR scoring system. It is widely agreed upon that Tiger Woods is the best golfer ever to play the game, winning 15 major championships in an era with elite competition. Jack Nicklaus won 18 major championships but the competition when he won his major championships was not at the same level that Tiger Woods faced.
|
|
null | false
| 145
|
The goal of a language model is to assign meaningful probabilities to a sequence of words. Given a set of tokens $\mathbf {X}=(x_1,....,x_T)$, where $T$ is the length of a sequence, our task is to estimate the joint conditional probability $P(\mathbf {X})$, which is $P(\mathbf {X})=\prod _{i=1}^{T}P(x_{i}\mid x_{1},\ldots ,x_{i-1})$
where $(x_{1}, \ldots , x_{i-1})$ is the context. An intrinsic evaluation metric for the performance of language models is perplexity (PPL), which is defined as the inverse probability of the set of tokens, taking the $T^{th}$ root, where $T$ is the number of tokens
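As a concrete sketch of the definition above (not the paper's implementation), perplexity can be computed from per-token conditional probabilities, working in log space for numerical stability:

```python
import math

def perplexity(token_probs):
    """PPL = (prod_i P(x_i | context))^(-1/T), computed via logs."""
    T = len(token_probs)
    return math.exp(-sum(math.log(p) for p in token_probs) / T)

# Sanity check: a uniform model over a 10-token vocabulary has PPL 10,
# regardless of sequence length.
uniform_ppl = perplexity([0.1] * 5)
```

Lower perplexity means the model assigns higher probability to the observed tokens.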
In our two approaches we use transformer-based architectures: BERT and Transformer-XL, as mentioned before. Calculating the auto-regressive $P(\mathbf {X})$ for Transformer-XL is quite straightforward, as the model is unidirectional, but it doesn't factorize the same way for a bi-directional model like BERT.
BERT's bi-directional context poses a problem for calculating an auto-regressive joint probability. A simple fix could be to mask all the tokens $\mathbf {x}_{>i}$ and calculate the conditional factors as we do for a unidirectional model. By doing so, though, we lose the advantage of the bi-directional context that the BERT model enables. We propose an approximation of the joint probability as,
This type of approximation has been previously explored with bi-directional RNN LMs BIBREF9 but not for deep transformer models. We therefore define a pseudo-perplexity score from the above approximated joint probability.
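The masked factorization described above can be sketched as follows; `cond_prob` is a placeholder standing in for a masked-LM scoring call (here a toy function), not the paper's BERT model:

```python
import math

def pseudo_perplexity(tokens, cond_prob):
    """Approximate PPL from P(x_i | x_{!=i}): mask each position in
    turn and score the true token against the full remaining context."""
    T = len(tokens)
    total = 0.0
    for i in range(T):
        masked = tokens[:i] + ["[MASK]"] + tokens[i + 1:]
        total += math.log(cond_prob(tokens[i], masked, i))
    return math.exp(-total / T)

# Toy stand-in "model" that gives every masked token probability 0.5.
toy_model = lambda token, context, position: 0.5
score = pseudo_perplexity(["the", "cat", "sat", "down"], toy_model)
```

Unlike true perplexity, this score uses both left and right context for every position, which is why it is only a pseudo-likelihood rather than a proper joint probability.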
The original BERT has two training objectives: 'masked language modelling', in which input tokens are masked randomly and the masked tokens are predicted using the left and right context, and additionally the 'next sentence prediction' task, which jointly trains text-pair representations. For training the masked language model, the original BERT used Byte Pair Encoding (BPE) BIBREF10 for subword tokenization BIBREF11. For example, the rare word "unaffable" would be split up into more frequent subwords such as ["un", "##aff", "##able"]. To remain consistent with experiments performed with LSTMs, we use Morfessor for subword tokenization in Finnish. In addition, we also apply boundary markers as in (Table TABREF7) and train two separate models using this distinction. We train with left-marked markings, as the original BERT was trained with such a scheme, and with left+right-marked, as it was the previous SOTA for Finnish. For the Transformer-XL experiments, we train only with the left+right-marked scheme.
The Next Sentence Prediction (NSP) task is a binary classification task which predicts whether two segments follow each other in the original text. This pre-training task was proposed to further improve performance on downstream tasks, like Natural Language Inference (NLI), but in reality removing the NSP loss matches or slightly improves downstream task performance BIBREF12. In this paper, we have omitted the NSP task from the BERT pre-training procedure and changed the input from a SEGMENT-PAIR input to a SINGLE SEGMENT input, as seen in (Fig FIGREF8).
Transformer-XL introduced the notion of recurrence in self-attention by caching the hidden state sequence to compute the hidden states of a new segment. It also introduces a novel relative positional embedding scheme, and combined they address the issue of fixed context lengths. Transformer-XL, as mentioned, is a unidirectional deep transformer architecture; therefore the perplexity can be calculated as in (Eq DISPLAY_FORM5). The only change is in the input format, where we use sub-word units rather than whole-word units, as Finnish is morphologically richer than English.
In our two approaches we use transformer based architectures: BERT and Transformer-XL as mentioned before.
|
Which two approaches do the authors use in the project?
|
Transformer based architectures: BERT and Transformer-XL.
|
null | false
| null |
Identify which instrument is string or woodwind: Octofone, Suona
|
Octofone is string, Suona is woodwind.
|
|
null | false
| null |
Classify the following types of cars as "economy" or "luxury": Ford, Chevrolet, Lamborghini, Ferrari, Mercedes, Honda, Lexus, Toyota, Nissan, Subaru
|
Ford, Chevrolet, Honda, Toyota, and Subaru are high-quality car brands, but are generally not considered to be "luxury" brands. Lamborghini, Ferrari, Mercedes, and Lexus are luxury brands.
|
|
null | false
| null |
Identify which instrument is string or percussion: Talking drum, Tambura
|
Tambura is string, Talking drum is percussion.
|
|
null | false
| null |
Who is the antagonist on The X-Files?
|
There are many antagonists on The X-Files, but the most long-running individual antagonist is The Cigarette Smoking Man (CSM) also known as C.G.B. Spender. He was primarily responsible for orchestrating conspiracies. He was a member of "The Syndicate," which was a mysterious shadow government group that covered up the existence of extraterrestrial life.
|
|
null | false
| null |
Write an friendly, convincing e-mail to my landlord asking whether I can get a dog.
|
I am writing to you today to request permission to keep a dog in the unit. As current, responsible tenants, we would like to make every effort to assure you that the dog will not be a nuisance to you or to our neighbors.
We would like to set up a time to discuss the conditions you have to qualify us as pet owners. At that time, we are happy to discuss any financial requirements you may have in order for us to keep a dog.
We promise to provide appropriate levels of exercise, attention and care for our dog. We will continue to keep a clean apartment, take into consideration our dog's interaction with neighbors and ensure our pet is properly cared for whenever we're unable to take care of it ourselves.
|
|
1912.00423
| false
| null |
To support future large-scale operations, a multi-protocol message passing system was used for inter-module communication. This modular design also allows different components to be swapped out seamlessly, provided they continue to communicate via the expected interface. Routines were developed to simulate input data based on the authors' experience with real healthcare data. The reasons for this choice were twofold: One, healthcare data can be high in incidental complexity, requiring one-off code to handle unusual inputs, but not necessarily in such a way as to significantly alter the fundamental engineering choices in a semantic enrichment engine such as this one. Two, healthcare data is strictly regulated, and the process for obtaining access to healthcare data for research can be cumbersome and time-consuming.
A simplified set of input data, in a variety of different formats that occur frequently in a healthcare setting, was used for simulation. In a production setting, the Java module that generates simulation data would be replaced by either a data source that directly writes to the input message queue or a Java module that intercepts or extracts production data, transforms it as needed, and writes it to the input message queue. A component-level view of the systems architecture is shown in Figure FIGREF7
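A toy stand-in for the simulation module described above might look like the following; the record fields and formats are illustrative assumptions, not the system's actual schema or message-passing stack:

```python
import json
import queue
import random

# Input queue standing in for the multi-protocol message queue the
# enrichment engine reads from.
input_queue = queue.Queue()

def simulate_record(i):
    """Emit a simplified 'healthcare-like' record in one of two formats
    common in healthcare settings (JSON or CSV). All field names here
    are hypothetical."""
    if random.choice(["json", "csv"]) == "json":
        return json.dumps({"id": i, "type": "observation", "value": 98.6})
    return f"{i},observation,98.6"

for i in range(3):
    input_queue.put(simulate_record(i))
```

In production, the generating loop would be replaced by a module that intercepts or extracts real data, transforms it as needed, and writes it to the same input queue.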
Routines were developed to simulate input data based on the authors' experience with real healthcare data. The reasons for this choice were twofold: One, healthcare data can be high in incidental complexity, requiring one-off code to handle unusual inputs, but not necessarily in such a way as to significantly alter the fundamental engineering choices in a semantic enrichment engine such as this one. Two, healthcare data is strictly regulated, and the process for obtaining access to healthcare data for research can be cumbersome and time-consuming.
A simplified set of input data, in a variety of different formats that occur frequently in a healthcare setting, was used for simulation.
|
What type of simulations of real-time data feeds are used for validaton?
|
The answers are shown as follows:
* simplified set of input data, in a variety of different formats that occur frequently in a healthcare setting
|
2003.03612
| false
| null |
The present work: Binomials in large-scale online text. In this work, we use data from large-scale Internet text corpora to study binomials at a massive scale, drawing on text created by millions of users. Our approach is more wholesale than prior work - we focus on all binomials of sufficient frequency, without first restricting to small samples of binomials that might be frozen. We draw our data from news publications, wine reviews, and Reddit, which in addition to large volume, also let us characterize binomials in new ways, and analyze differences in binomial orderings across communities and over time. Furthermore, the subject matter on Reddit leads to many lists about people and organizations that lets us study orderings of proper names — a key setting for word ordering which has been difficult to study by other means.
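As a minimal illustration of the kind of measurement this implies (a toy sketch, not the authors' pipeline), one can tally how often each "A and B" pair appears in each of its two orders:

```python
import re
from collections import Counter

def binomial_orderings(text):
    """Count ordered occurrences of 'A and B' patterns in a text."""
    counts = Counter()
    for a, b in re.findall(r"\b(\w+) and (\w+)\b", text.lower()):
        counts[(a, b)] += 1
    return counts

sample = "salt and pepper, bread and butter, pepper and salt, salt and pepper"
orders = binomial_orderings(sample)
```

Comparing `orders[(a, b)]` against `orders[(b, a)]` over a large corpus gives the kind of ordering asymmetry the study examines.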
We draw our data from news publications, wine reviews, and Reddit, which in addition to large volume, also let us characterize binomials in new ways, and analyze differences in binomial orderings across communities and over time.
|
What online text resources are used to test binomial lists?
|
The answers are shown as follows:
* news publications, wine reviews, and Reddit
|
null | false
| null |
What is a CPT as it relates to Healthcare?
|
A CPT is an acronym for Current Procedural Terminology, a standard for billing in the medical industry. For example, CPT code 77427 is for radiation treatment management and is billed once per five treatment sessions, regardless of the actual time period in which the services are furnished. The services need not be furnished on consecutive days.
|
|
1904.02357
| false
| null |
The Title-to-Story system is a baseline, which generates directly from topic.
The Title-to-Story system is a baseline, which generates directly from topic.
|
What are the baselines?
|
The answers are shown as follows:
* Title-to-Story system
|
null | false
| null |
Should I go to the beach or mountains for my next vacation?
|
We have vacation homes in many locations to fit your needs! You can enjoy warmer temperatures at the beach with beautiful sunsets across the water. Go for a run on the sand and stop by the many local shops and boutiques as you mingle through the day. Are you an adventure seeker? If so, check out our mountain rental homes. You can go mountain biking, hiking, or even skiing, depending on the time of year. A mountain vacation is a bit more of a thrill seeker's dream, but it can be relaxing as well, as you spend time in nature.
|
|
null | false
| 92
|
Table TABREF13 reports the main results. In addition to BERT BIBREF10 and RoBERTa BIBREF25, we also include XLNet BIBREF30 as an example of a strong, but different, model architecture. We show test set performance on the ANLI test sets per round, the total ANLI test set, and the exclusive test subset (examples from test-set-exclusive workers). We also show accuracy on the SNLI test set and the MNLI development set (used to compare different model configurations across table rows). In what follows, we briefly discuss our observations.
Notice that the base model for each round performs very poorly on that round’s test set.
|
Does the base model for each round perform well on that round's test set?
|
No, it doesn't.
|
null | false
| 326
|
Deep learning models have been widely used in many natural language processing (NLP) tasks. A major challenge is how to design and learn the semantic composition function while modeling a text sequence. The typical composition models involve sequential BIBREF0 , BIBREF1 , convolutional BIBREF2 , BIBREF3 , BIBREF4 and syntactic BIBREF5 , BIBREF6 , BIBREF7 compositional models.
In spite of their success, these models have two major limitations. First, they usually use a shared composition function for all kinds of semantic compositions, even though the compositions have different characteristics in nature. For example, the composition of the adjective and the noun differs significantly from the composition of the verb and the noun. Second, different composition functions are learned from scratch in different tasks. However, given a certain natural language, its composition functions should be the same (on meta-knowledge level at least), even if the tasks are different.
To address these problems, we need to design a dynamic composition function which can vary with different positions and contexts in a sequence, and share it across the different tasks. To share some meta-knowledge of composition function, we can adopt the multi-task learning BIBREF8 . However, the sharing scheme of most neural multi-task learning methods is feature-level sharing, where a subspace of the feature space is shared across all the tasks. Although these sharing schemes are successfully used in various NLP tasks BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , they are not suitable to share the composition function.
In this paper, inspired by recent work on dynamic parameter generation BIBREF15 , BIBREF16 , BIBREF17 , we propose a function-level sharing scheme for multi-task learning, in which a shared meta-network is used to learn the meta-knowledge of semantic composition among the different tasks. The task-specific semantic composition function is generated by the meta-network. Then the task-specific composition function is used to obtain the task-specific representation of a text sequence. The difference between two sharing schemes is shown in Figure 1 . Specifically, we use two LSTMs as meta and basic (task-specific) network respectively. The meta LSTM is shared for all the tasks. The parameters of the basic LSTM are generated based on the current context by the meta LSTM, therefore the composition function is not only task-specific but also position-specific. The whole network is differentiable with respect to the model parameters and can be trained end-to-end.
We demonstrate the effectiveness of our architectures on two kinds of NLP tasks: text classification and sequence tagging. Experimental results show that jointly learning of multiple related tasks can improve the performance of each task relative to learning them independently.
Our contributions are three-fold:
Then the task-specific composition function is used to obtain the task-specific representation of a text sequence.
|
What is the task-specific composition function used to do?
|
To obtain the task-specific representation of a text sequence.
|
null | false
| 62
|
Neural machine translation (NMT) BIBREF0 , BIBREF1 , BIBREF2 has enabled end-to-end training of a translation system without needing to deal with word alignments, translation rules, and complicated decoding algorithms, which are the characteristics of phrase-based statistical machine translation (PBSMT) BIBREF3 . Although NMT can be significantly better than PBSMT in resource-rich scenarios, PBSMT performs better in low-resource scenarios BIBREF4 . Only by exploiting cross-lingual transfer learning techniques BIBREF5 , BIBREF6 , BIBREF7 , can the NMT performance approach PBSMT performance in low-resource scenarios.
However, such methods usually require an NMT model trained on a resource-rich language pair like French INLINEFORM0 English (parent), which is to be fine-tuned for a low-resource language pair like Uzbek INLINEFORM1 English (child). On the other hand, multilingual approaches BIBREF8 propose to train a single model to translate multiple language pairs. However, these approaches are effective only when the parent target or source language is relatively resource-rich like English (En). Furthermore, the parent and child models should be trained on similar domains; otherwise, one has to take into account an additional problem of domain adaptation BIBREF9.
In this paper, we work on a linguistically distant and thus challenging language pair, Japanese→Russian (Ja→Ru), which has only 12k lines of news-domain parallel corpus and hence is extremely resource-poor. Furthermore, the amounts of indirect in-domain parallel corpora, i.e., Ja→En and Ru→En, are also small. As we demonstrate in Section 4, this severely limits the performance of prominent low-resource techniques, such as multilingual modeling, back-translation, and pivot-based PBSMT. To remedy this, we propose a novel multistage fine-tuning method for NMT that combines multilingual modeling BIBREF8 and domain adaptation BIBREF9.
We have addressed two important research questions (RQs) in the context of extremely low-resource machine translation (MT) and our explorations have derived rational contributions (CTs) as follows:
To the best of our knowledge, we are the first to perform such an extensive evaluation of the extremely low-resource MT problem, and we propose a novel multilingual multistage fine-tuning approach involving multilingual modeling and domain adaptation to address it.
To remedy this, we propose a novel multistage fine-tuning method for NMT that combines multilingual modeling and domain adaptation.
|
What kind of method it is?
|
A method that combines multilingual modeling and domain adaptation.
|
null | false
| null |
What kind of dog should I get?
|
There are many dog breeds to choose from. Choosing a dog breed is a personal choice. Consider what kind of lifestyle you live and pick a dog that fits your lifestyle. For example, if you are allergic to dogs you may consider a poodle, or poodle mix as they tend to be hypoallergenic.
|
|
null | false
| null |
Why are people afraid of sharks?
|
Many people are afraid of sharks because of irrational fears stoked by media portrayals and by the feeling of losing control. Only about 80 humans are attacked by sharks each year, with a very low death rate. Galeophobia is the term for the fear of sharks.
|
|
1909.02480
| false
| null |
We first conduct experiments to compare the performance of FlowSeq with strong baseline models, including NAT w/ Fertility BIBREF6, NAT-IR BIBREF7, NAT-REG BIBREF25, LV NAR BIBREF26, CTC Loss BIBREF27, and CMLM BIBREF8.
We first conduct experiments to compare the performance of FlowSeq with strong baseline models, including NAT w/ Fertility BIBREF6, NAT-IR BIBREF7, NAT-REG BIBREF25, LV NAR BIBREF26, CTC Loss BIBREF27, and CMLM BIBREF8.
|
What non autoregressive NMT models are used for comparison?
|
The answers are shown as follows:
* NAT w/ Fertility
* NAT-IR
* NAT-REG
* LV NAR
* CTC Loss
* CMLM
|
null | false
| null |
give me a list of ways to cool down
|
- drink a slushy
- take a cold shower
- dip in glacier water
- take your clothes off
- participate in a polar bear plunge
- stand very still
- take shelter under shade
- sit in front of a fan
- roll down the windows in a moving car
|
|
null | false
| null |
Which characters belong to DC or Marvel Universe? Nova Corps or Green Lantern Corps
|
Nova Corps is Marvel, Green Lantern Corps is DC
|
|
null | false
| 459
|
Language games were first introduced by to explore the meanings of language utterances. Instantiating this concept with the signalling game design from enables linguists to explore the emergence of linguistic structure where artificial languages are represented as symbolic systems. The success of deep learning (DL) models on complicated cognitive tasks then inspired researchers to apply DL-based models to language games to investigate the agents' ability to invent communication protocols without preset rules (e.g..
In the existing works (e.g., there are usually two types of agents, speakers that emit messages based on their observations (i.e. input target objects) and listeners that receive messages and act accordingly, as illustrated in Figure. Based on the goals for listeners, we can categorise most of the games into the following three types: 1) referential games in which listeners need to select the target object observed by the speaker among a set of candidate objects (candidates for short) (e.g.; 2) reconstruction games in which listeners need to reconstruct the speaker's observation (e.g. ; and 3) navigation games in which listeners need to go to the location specified by speakers (e.g.. We focus on referential games, illustrated in Figure, which have been well investigated in both the linguistic community (e.g. and emergent communication community (e.g..
It is reasonable to assume that listeners in referential games can differentiate the target object from the other distractors in the context as long as the speaker's message encodes some unique feature of the target. Therefore, the speaker's messages must convey a different amount of information for listeners to complete their task in games where candidates are similar in different degrees. For example, to distinguish two very similar candidates, e.g. a blue wide-striped shirt and a blue narrow-striped shirt, more information about the target must be encoded by the speaker's message than to distinguish a shirt from a cup. Experiments with human participants show that emergent languages are sensitive to the requirements of the communicative tasks for which they are used, with languages developing in which only the necessary discriminatory information is encoded.
˚Correspondence author, e-mail address: s.guo@ed.ac.uk

Following, we refer to such ability to encode information about input space (e.g. structure, dimensionality, distribution of samples) as expressivity. In this work, we explore the factors that influence expressivity in the framework of DL-based referential games, and we argue that it is important to understand these factors so that agents can develop languages that are effective enough for completing various tasks.
Our contribution in this work is threefold. First, we propose and verify a hypothesis about the determining factors of expressivity of emergent languages under the DL-based framework. Following, we show that context size (i.e. the number of candidates the listener must select among when interpreting the speaker's message) influences both the complexity and unpredictability of that context. Complexity refers to the similarity between the items in the context: in order for the listener to select the target, more information needs to be encoded in the speaker's signal if the context contains more similar distractors for a given target. Unpredictability refers to the extent to which the information that needs to be encoded by the speaker is stable across contexts in successive episodes of communication: in games where the information required by the listener differs from trial (training epoch for DL agents) to trial, the speaker needs to encode more information on every trial in order to be sure to encode the information the listener needs. As we show in Section 3.1, complexity and unpredictability are affected differently by the context size: as context size increases, complexity increases but unpredictability decreases. Therefore, we propose and verify the following hypothesis about the determining factors of expressivity:
[HYPOTHESIS] The expressivity of emergent languages is a trade-off between the complexity and the unpredictability of context in language games.
[1]
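The interplay between complexity and unpredictability can be illustrated with a toy simulation. This is a hedged proxy of our own, not a measure from the paper: objects are random binary attribute vectors, the per-trial "information cost" is the similarity between the target and its closest distractor, complexity is the mean cost over trials, and unpredictability is its standard deviation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8            # number of binary attributes per object (assumed)
episodes = 2000  # communication trials per context size

def episode_cost(k):
    """Crude proxy for the information the speaker must encode in one
    trial: the number of attributes the target shares with its most
    similar distractor -- closer distractors demand more detail."""
    objs = rng.integers(0, 2, size=(k, d))
    target, distractors = objs[0], objs[1:]
    shared = (distractors == target).sum(axis=1)
    return shared.max()

def stats(k):
    costs = np.array([episode_cost(k) for _ in range(episodes)])
    # complexity: average information needed per trial
    # unpredictability: how much that need varies across trials
    return costs.mean(), costs.std()

c2, u2 = stats(2)    # small context: 1 distractor
c32, u32 = stats(32) # large context: 31 distractors
print(c2, u2, c32, u32)
```

With these assumptions, the larger context yields a higher mean cost (complexity up) but a lower standard deviation (unpredictability down), matching the trade-off stated in the hypothesis.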
Our second contribution is a novel measure of expressivity based on partial ordering of languages in terms of their generalisation performances across tasks. Although expressivity is related to the amount of information encoded in a language, we illustrate that mutual information (MI) is not an appropriate measurement for expressivity in Section 3.2. Considering that one of our objectives is to facilitate a universally useful emergent language, we propose to measure expressivity of a language based on the generalisation performance of listening agents trained with that language across different games. Since it is more challenging to quantitatively and directly measure expressivity, we focus on the partial ordering between languages in this first step towards understanding expressivity.
Our final contribution is the discovery of message type collapse, i.e. the number of unique messages is significantly lower than the size of input space in relatively easy games, which can lead to highly ambiguous emergent languages. To overcome the technical limitations imposed by GPU memory size on large-scale referential games, we introduce the contrastive loss proposed by in referential games. While comparing the behaviour of the contrastive loss and the loss function used in previous works (e.g, we find that this contrastive loss can greatly alleviate the collapse of message types, leading to more disambiguous and potentially more expressive emergent languages.
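An InfoNCE-style contrastive objective of the kind referred to above can be sketched as follows. The embedding sizes, dot-product scoring, and use of in-batch distractors are illustrative assumptions, not necessarily the exact loss from the cited work.

```python
import numpy as np

def log_softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

def contrastive_loss(msg_emb, cand_emb):
    """InfoNCE-style loss: each message should score highest against its
    own candidate; the other items in the batch act as distractors, so a
    batch of size B gives every listener a context of B candidates
    without materializing B separate candidate sets on the GPU.

    msg_emb:  (B, d) message encodings
    cand_emb: (B, d) encodings of the matching target objects
    """
    scores = msg_emb @ cand_emb.T          # (B, B) similarity matrix
    logp = log_softmax(scores)             # row-wise over candidates
    return -np.mean(np.diag(logp))         # NLL of the correct pairing

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
# Perfectly aligned pairs give a lower loss than mismatched ones.
aligned = contrastive_loss(emb, emb)
shuffled = contrastive_loss(emb, emb[::-1])
print(aligned < shuffled)  # True
```

Because every row of the score matrix is normalized over the whole batch, the speaker is pushed toward distinct messages for distinct inputs, which is one intuition for why this loss alleviates message type collapse.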
As we show in Section 3.1, complexity and unpredictability are affected differently by the context size: as context size increases, complexity increases but unpredictability decreases.
|
Why is it okay that we define unpredictability in terms of 1 step (e --> e+1) rather than all of the previous trials, in the order the trial occurred?
|
Like we said in our previous reply, no matter how many timesteps we use to define the unpredictability, the unpredictability would still decrease as the size of candidate set increases. Therefore, our conclusion still holds with a different number of time steps.
|
null | false
| null |
How many angels can dance on the head of a pin?
|
This exact question puzzled theologians for many hundreds of years and led to many differing points of view. The issue with the question is that there is no correct answer. Firstly, are angels real? If they are, how big are they? Are they even able to dance? If they could dance, how much space would they need? Secondly, how big is the pin? Is it a normal-sized Earthly pin, or something much larger, on a celestial scale?
The answer relates to faith: if you believe in angels, you believe in their ability to dance, and specifically in their ability to dance upon a pin, and you can argue in any direction based on your faith that the underlying premise of the question is plausible. If you do not have that faith, you would likely view this question with a great degree of cynicism and probably see it as pointless.
But with a degree of humour, and a tip of the hat to Douglas Adams, I'm happy to conclude that the answer is 42.
|
|
1607.06275
| false
| null |
We compare our model with two sets of baselines:
MemN2N BIBREF12 is an end-to-end trainable version of memory networks BIBREF9 . It encodes question and evidence with a bag-of-word method and stores the representations of evidences in an external memory. A recurrent attention model is used to retrieve relevant information from the memory to answer the question.
Attentive and Impatient Readers BIBREF6 use bidirectional LSTMs to encode question and evidence, and do classification over a large vocabulary based on these two encodings. The simpler Attentive Reader uses a similar way as our work to compute attention for the evidence. And the more complex Impatient Reader computes attention after processing each question word.
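The attention mechanism these baselines share can be sketched as a single "hop" over stored evidence. The vector sizes and dot-product scoring here are illustrative assumptions, not the exact MemN2N or Attentive Reader formulation (which also involve separate memory embeddings and learned encoders).

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    return np.exp(x) / np.exp(x).sum()

def attention_readout(question, evidences):
    """One attention hop in the spirit of a memory network: score each
    stored evidence vector against the question encoding and return the
    attention-weighted sum as the retrieved information."""
    scores = evidences @ question          # (n_evidence,)
    weights = softmax(scores)              # relevance of each evidence
    return weights @ evidences             # (d,) retrieved vector

rng = np.random.default_rng(0)
q = rng.normal(size=6)                     # question encoding (assumed)
mem = rng.normal(size=(5, 6))              # 5 evidence encodings (assumed)
out = attention_readout(q, mem)
print(out.shape)  # (6,)
```

MemN2N stacks several such hops, feeding the retrieved vector back into the query; the Impatient Reader instead recomputes attention after reading each question word.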
We compare our model with two sets of baselines:
MemN2N BIBREF12 is an end-to-end trainable version of memory networks BIBREF9 .
Attentive and Impatient Readers BIBREF6 use bidirectional LSTMs to encode question and evidence, and do classification over a large vocabulary based on these two encodings.
|
What are the baselines?
|
The answers are shown as follows:
* MemN2N BIBREF12
* Attentive and Impatient Readers BIBREF6
|
null | false
| null |
When Mohammad inherited the newly born sultanate from his father Alauddin, the land was still infested with thieves and robbers. As such, he spent his entire reign establishing law in his land. He ordered his governors never to give refuge to anyone who defied his authority. This campaign was successful, and at the end of his reign the land became peaceful and law-abiding. He also sat on the takht-ē-firoza, or Turquoise Throne, which was gifted to him by Kapaya Nayaka.
|
Based on this paragraph, what was the main focus of Mohammad Shah I's reign over his sultanate?
|
Mohammad was mainly focused on establishing law and order within his territory during his reign in order to remove thieves and robbers.
|
null | false
| null |
Who turned oil drums into steel drums for music?
|
Traditionally, West Indians
|
|
null | false
| null |
Why is the Chrysler Hemi V8 so legendary?
|
The Chrysler "Hemi" V8s are a series of American V8 internal combustion engines known for their simplicity, reliability, low cost, and high power and torque output. The Hemi was originally named for its hemispherical combustion chamber, which increased power and efficiency. The very first Hemi engines were developed during World War II for the Republic P-47 and the M47 Patton tank, and they were first introduced into passenger cars in 1951 as the Chrysler FirePower.
The Hemi is now in its third generation (3G), built from 2003 to the present at Chrysler's Saltillo engine plant in Ramos Arizpe, Mexico. The 3G Hemis have had power outputs ranging from 345 horsepower in the first 5.7-liter version to 1,025 horsepower in the 6.2-liter High-Output Demon 170 engine. The latest 6.2 Hemi in the Demon 170 is now one of the most powerful engines installed in a passenger car anywhere in the world, at a fraction of the cost of other 1,000-horsepower vehicles.
All of these factors have added up to make a tough, powerful and extremely desirable engine for automotive enthusiasts around the world cementing the Hemi Legend.
|
|
null | false
| null |
Classify these vehicles by which go in the water or on land: dune buggy, cruise ship, pirate ship, bulldozer, submarine, sailboat, truck, car, ATV
|
Water: cruise ship, pirate ship, submarine, sailboat
Land: dune buggy, bulldozer, truck, car, ATV
|
|
null | false
| null |
Name five states in the USA that don't touch the ocean.
|
Five states in the USA that don't touch the ocean are Iowa, Illinois, Indiana, Ohio, and Utah.
|
|
null | false
| null |
What is Purchasing power parity (PPP)?
|
Purchasing power parity (PPP) is a measurement of the price of specific goods in different countries, used to compare the absolute purchasing power of the countries' currencies. PPP is effectively the ratio of the price of a basket of goods at one location divided by the price of the same basket at a different location. The PPP exchange rate may differ from the market exchange rate because of tariffs and other transaction costs.
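As a rough illustration with made-up prices, the implied PPP exchange rate is just the ratio of the basket's cost in one currency to its cost in the other:

```python
# Toy PPP computation with made-up prices: the same basket priced in two
# currencies. The implied PPP exchange rate is the cost ratio.
basket_usd = {"bread": 2.50, "milk": 1.20, "rice": 3.00}   # prices in USD
basket_eur = {"bread": 2.00, "milk": 1.00, "rice": 2.60}   # prices in EUR

cost_usd = sum(basket_usd.values())        # 6.70 USD
cost_eur = sum(basket_eur.values())        # 5.60 EUR
ppp_rate = cost_usd / cost_eur             # USD per EUR implied by PPP
print(round(ppp_rate, 3))  # 1.196
```

If the market exchange rate were, say, 1.10 USD per EUR, the gap between 1.10 and the implied 1.196 would reflect factors such as tariffs and other transaction costs.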
|