Columns: question (string, 6–3.53k characters), text (string, 17–2.05k characters), source (string, 1 distinct value)
Consider the (toy) grammar $G$ consisting of the following rules: R1: S --> NP VP R2: NP --> NN R3: NP --> Det NN R4: NN --> N R5: NN --> NN NN R6: NN --> NN PNP R7: PNP --> Prep NP R8: VP --> V R9: VP --> Adv V Indicate what type of constraints are (resp. are not) taken into account by the grammar $G$, and, for each constraint type mentioned, provide illustrative examples.
Constraints may be semantic, e.g. rejecting "The apple is angry.", or syntactic, e.g. rejecting "Red is apple the." Constraints are often represented by a grammar.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the (toy) grammar $G$ consisting of the following rules: R1: S --> NP VP R2: NP --> NN R3: NP --> Det NN R4: NN --> N R5: NN --> NN NN R6: NN --> NN PNP R7: PNP --> Prep NP R8: VP --> V R9: VP --> Adv V Indicate what type of constraints are (resp. are not) taken into account by the grammar $G$, and, for each constraint type mentioned, provide illustrative examples.
It is proposed that children will apply the morphophonological constraint to subclasses of alternating verbs that are all from the native class (monosyllabic). If the set of alternating verbs is not all from the native class, then the child will not apply the morphophonological constraint. This account correctly predicts that the morphophonological constraint could apply to some semantic subclasses, but not others. For example, children would apply the constraint to the following five subclasses of alternating verbs: Children would not apply the constraint to the class of "future having" verbs because they are not all from the native (monosyllabic) class, thereby allowing the following DOC examples to be well-formed: (15) John assigned/allotted/guaranteed/bequeathed Mary four tickets.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The objective of this question is to illustrate the use of a lexical semantics resource to compute lexical cohesion. Consider the following toy ontology providing a semantic structuring for a (small) set of nouns: <ontology> <node text='all'> <children> <node text='animate entities'> <children> <node text='human beings'> <children> <node text='man'></node> <node text='woman'></node> <node text='child'></node> </children> </node> <node text='animals'> <children> <node text='cat'></node> <node text='dog'></node> <node text='mouse'></node> </children> </node> </children> </node> <node text='non animate entities'> <children> <node text='abstract entities'> <children> <node text='freedom'></node> <node text='happiness'></node> </children> </node> <node text='concrete entities'> <children> <node text='table'></node> <node text='pen'></node> <node text='mouse'></node> </children> </node> </children> </node> </children> </node> </ontology> We want to use lexical cohesion to decide whether the provided text consists of one single topical segment corresponding to both sentences, or of two distinct topical segments, each corresponding to one of the sentences. Let's define the lexical cohesion of any set of words (in canonical form) as the average lexical distance between all pairs of words present in the set. The lexical distance between any two words is defined as the length of a shortest path between the two words in the available ontology. For example, 'freedom' and 'happiness' are at distance 2 (length, i.e. number of links, of the path: happiness −→ abstract entities −→ freedom), while 'freedom' and 'dog' are at distance 6 (length of the path: freedom −→ abstract entities −→ non animate entities −→ all −→ animate entities −→ animals −→ dog). Compute the lexical distance between all the pairs of words present in the above text and in the provided ontology (there are 6 such pairs).
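As a sanity check of such distances, a small script can rebuild the toy ontology as a graph and compute shortest-path lengths between word nodes. This is only an illustrative sketch: the edge list below is transcribed by hand from the XML above, and 'mouse' is attached only under 'animals' here (its second attachment is the subject of a later sub-question).

```python
from collections import deque

# Hand-transcribed parent -> children edges of the toy ontology.
edges = {
    "all": ["animate entities", "non animate entities"],
    "animate entities": ["human beings", "animals"],
    "human beings": ["man", "woman", "child"],
    "animals": ["cat", "dog", "mouse"],
    "non animate entities": ["abstract entities", "concrete entities"],
    "abstract entities": ["freedom", "happiness"],
    "concrete entities": ["table", "pen"],
}

# Build an undirected adjacency list.
adj = {}
for parent, children in edges.items():
    for child in children:
        adj.setdefault(parent, set()).add(child)
        adj.setdefault(child, set()).add(parent)

def lexical_distance(a, b):
    """Length (number of links) of a shortest path between two nodes (BFS)."""
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == b:
            return d
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None  # not connected

print(lexical_distance("freedom", "happiness"))  # 2, as in the example above
print(lexical_distance("freedom", "dog"))        # 6, as in the example above
```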
Recent advances in methods of lexical semantic relatedness - a survey. Natural Language Engineering 19 (4), 411–479, Cambridge University Press. Book: S. Harispe, S. Ranwez, S. Janaqi, J. Montmain. 2015. Semantic Similarity from Natural Language and Ontology Analysis, Morgan & Claypool Publishers.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The objective of this question is to illustrate the use of a lexical semantics resource to compute lexical cohesion. Consider the following toy ontology providing a semantic structuring for a (small) set of nouns: <ontology> <node text='all'> <children> <node text='animate entities'> <children> <node text='human beings'> <children> <node text='man'></node> <node text='woman'></node> <node text='child'></node> </children> </node> <node text='animals'> <children> <node text='cat'></node> <node text='dog'></node> <node text='mouse'></node> </children> </node> </children> </node> <node text='non animate entities'> <children> <node text='abstract entities'> <children> <node text='freedom'></node> <node text='happiness'></node> </children> </node> <node text='concrete entities'> <children> <node text='table'></node> <node text='pen'></node> <node text='mouse'></node> </children> </node> </children> </node> </children> </node> </ontology> We want to use lexical cohesion to decide whether the provided text consists of one single topical segment corresponding to both sentences, or of two distinct topical segments, each corresponding to one of the sentences. Let's define the lexical cohesion of any set of words (in canonical form) as the average lexical distance between all pairs of words present in the set. The lexical distance between any two words is defined as the length of a shortest path between the two words in the available ontology. For example, 'freedom' and 'happiness' are at distance 2 (length, i.e. number of links, of the path: happiness −→ abstract entities −→ freedom), while 'freedom' and 'dog' are at distance 6 (length of the path: freedom −→ abstract entities −→ non animate entities −→ all −→ animate entities −→ animals −→ dog). Compute the lexical distance between all the pairs of words present in the above text and in the provided ontology (there are 6 such pairs).
Semeval-2012 task 6: A pilot on semantic textual similarity. E. Agirre, D. Cer, M. Diab, A. Gonzalez-Agirre. *SEM 2012: The First Joint Conference on Lexical and Computational Semantics–Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012). Predictive linguistic features of schizophrenia. ES Kayi, M Diab, L Pauselli, M Compton, G Coppersmith.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
We'd like to do some sentence topic classification using a Naive-Bayes model. Consider the following toy learning corpus, where each sentence has been assigned a topic, either "Medical" or "Computer": \item Medical: plastic surgery process initial consultation can be scheduled by sending an email to the administration. \item Medical: in the process, the laser beam comes into contact with soft tissues. \item Medical: laser eye surgery process reshapes parts of the cornea by removing tiny amount of tissues. \item Computer: the team behind the laser based quantum computer includes scientists from the US, Australia and Japan. \item Computer: the optical laser barcode scanner was plugged on the USB port. \item Computer: cdrom laser lens cleaning process starts with opening the tray. \item Computer: laser was a computer trademark. The parameters are learned using some appropriate additive smoothing with the same value for all parameters. In the above learning corpus, there are $42$ token occurrences in "Medical" documents and $42$ token occurrences in "Computer" documents (punctuation is ignored). How would the following short sentence: "pulsed laser used for surgery process" be classified by this model?
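A minimal sketch of how such a classification could be scored is given below. The scoring function is generic: it takes per-class token counts, the class's total token count (e.g. the 42 mentioned above), a smoothed vocabulary size and a prior, and applies add-\(\alpha\) smoothing. The counts shown in the commented usage are hypothetical placeholders, not the actual counts of the toy corpus.

```python
import math

def nb_log_score(sentence_tokens, class_counts, total_tokens, vocab_size,
                 prior, alpha=1.0):
    """log P(class) + sum_i log P(token_i | class), with add-alpha smoothing.

    class_counts: dict mapping token -> count of that token in the class
    total_tokens: total number of token occurrences in the class (e.g. 42)
    vocab_size:   size of the smoothed vocabulary
    """
    score = math.log(prior)
    for tok in sentence_tokens:
        count = class_counts.get(tok, 0)
        score += math.log((count + alpha) / (total_tokens + alpha * vocab_size))
    return score

# Hypothetical usage (counts would be read off the toy corpus above):
# medical_counts = {"laser": 2, "surgery": 2, "process": 3}
# computer_counts = {"laser": 4, "process": 2}
# tokens = "pulsed laser used for surgery process".split()
# best = max(["Medical", "Computer"],
#            key=lambda c: nb_log_score(tokens, counts[c], 42, V, 0.5))
```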
There is an increasing interest in text mining and information extraction strategies applied to the biomedical and molecular biology literature due to the increasing number of electronically available publications stored in databases such as PubMed. Decision tree learning – Sentence extraction – Terminology extraction – Latent semantic indexing – Lemmatisation – groups together all like terms that share the same lemma so that they are classified as a single item. Morphological segmentation – separates words into individual morphemes and identifies the class of the morphemes.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
We'd like to do some sentence topic classification using a Naive-Bayes model. Consider the following toy learning corpus, where each sentence has been assigned a topic, either "Medical" or "Computer": \item Medical: plastic surgery process initial consultation can be scheduled by sending an email to the administration. \item Medical: in the process, the laser beam comes into contact with soft tissues. \item Medical: laser eye surgery process reshapes parts of the cornea by removing tiny amount of tissues. \item Computer: the team behind the laser based quantum computer includes scientists from the US, Australia and Japan. \item Computer: the optical laser barcode scanner was plugged on the USB port. \item Computer: cdrom laser lens cleaning process starts with opening the tray. \item Computer: laser was a computer trademark. The parameters are learned using some appropriate additive smoothing with the same value for all parameters. In the above learning corpus, there are $42$ token occurrences in "Medical" documents and $42$ token occurrences in "Computer" documents (punctuation is ignored). How would the following short sentence: "pulsed laser used for surgery process" be classified by this model?
Here is another model, with a different set of issues. This is an implementation of an unsupervised Naive Bayes model for document clustering. That is, we would like to classify documents into multiple categories (e.g. "spam" or "non-spam", or "scientific journal article", "newspaper article about finance", "newspaper article about politics", "love letter") based on textual content.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the following sentence: High-energy pulsed laser beams are used in soft-tissue surgery. Using a 2-gram language model and a tokenizer that splits on whitespaces and punctuation (including hyphens (-)), what is the probability of the above sentence? Provide your answer as a formula, but clearly explaining each variable.
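For reference, with a 2-gram model the probability of a tokenized sentence \(w_1 \dots w_n\) factorizes as below. This is the generic bigram chain rule, not a worked answer for the specific sentence; whether a sentence-start symbol \(\langle s\rangle\) is used depends on the convention assumed in the lectures.

\[ P(w_1, \dots, w_n) = P(w_1 \mid \langle s\rangle)\, \prod_{i=2}^{n} P(w_i \mid w_{i-1}) \]

where each factor \(P(w_i \mid w_{i-1})\) is a bigram parameter estimated from the training corpus, e.g. by maximum likelihood as \(c(w_{i-1}, w_i)/c(w_{i-1})\) with \(c(\cdot)\) the corpus counts.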
Let \(c(w, w')\) be the number of occurrences of the word \(w\) followed by the word \(w'\) in the corpus. The equation for bigram probabilities is as follows: \( p_{KN}(w_i \mid w_{i-1}) = \frac{\max(c(w_{i-1}, w_i) - \delta,\, 0)}{\sum_{w'} c(w_{i-1}, w')} + \lambda_{w_{i-1}}\, p_{KN}(w_i) \) where the unigram probability \(p_{KN}(w_i)\) depends on how likely it is to see the word \(w_i\) in an unfamiliar context, which is estimated as the number of times it appears after any other word divided by the number of distinct pairs of consecutive words in the corpus: \( p_{KN}(w_i) = \frac{|\{w' : 0 < c(w', w_i)\}|}{|\{(w', w'') : 0 < c(w', w'')\}|} \)
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the following sentence: High-energy pulsed laser beams are used in soft-tissue surgery. Using a 2-gram language model and a tokenizer that splits on whitespaces and punctuation (including hyphens (-)), what is the probability of the above sentence? Provide your answer as a formula, but clearly explaining each variable.
Let \(c(w, w')\) be the number of occurrences of the word \(w\) followed by the word \(w'\) in the corpus. The equation for bigram probabilities is as follows: \( p_{KN}(w_i \mid w_{i-1}) = \frac{\max(c(w_{i-1}, w_i) - \delta,\, 0)}{\sum_{w'} c(w_{i-1}, w')} + \lambda_{w_{i-1}}\, p_{KN}(w_i) \) where the unigram probability \(p_{KN}(w_i)\) depends on how likely it is to see the word \(w_i\) in an unfamiliar context, which is estimated as the number of times it appears after any other word divided by the number of distinct pairs of consecutive words in the corpus: \( p_{KN}(w_i) = \frac{|\{w' : 0 < c(w', w_i)\}|}{|\{(w', w'') : 0 < c(w', w'')\}|} \)
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The objective of this question is to illustrate the use of a lexical semantics resource to compute lexical cohesion. Consider the following toy ontology providing a semantic structuring for a (small) set of nouns: <ontology> <node text='all'> <children> <node text='animate entities'> <children> <node text='human beings'> <children> <node text='man'></node> <node text='woman'></node> <node text='child'></node> </children> </node> <node text='animals'> <children> <node text='cat'></node> <node text='dog'></node> <node text='mouse'></node> </children> </node> </children> </node> <node text='non animate entities'> <children> <node text='abstract entities'> <children> <node text='freedom'></node> <node text='happiness'></node> </children> </node> <node text='concrete entities'> <children> <node text='table'></node> <node text='pen'></node> <node text='mouse'></node> </children> </node> </children> </node> </children> </node> </ontology> The word 'mouse' appears at two different places in the toy ontology. What does this mean? What specific problems does it raise when the ontology is used? How could such problems be solved? (just provide a sketch of explanation.)
Recent advances in methods of lexical semantic relatedness - a survey. Natural Language Engineering 19 (4), 411–479, Cambridge University Press. Book: S. Harispe, S. Ranwez, S. Janaqi, J. Montmain. 2015. Semantic Similarity from Natural Language and Ontology Analysis, Morgan & Claypool Publishers.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The objective of this question is to illustrate the use of a lexical semantics resource to compute lexical cohesion. Consider the following toy ontology providing a semantic structuring for a (small) set of nouns: <ontology> <node text='all'> <children> <node text='animate entities'> <children> <node text='human beings'> <children> <node text='man'></node> <node text='woman'></node> <node text='child'></node> </children> </node> <node text='animals'> <children> <node text='cat'></node> <node text='dog'></node> <node text='mouse'></node> </children> </node> </children> </node> <node text='non animate entities'> <children> <node text='abstract entities'> <children> <node text='freedom'></node> <node text='happiness'></node> </children> </node> <node text='concrete entities'> <children> <node text='table'></node> <node text='pen'></node> <node text='mouse'></node> </children> </node> </children> </node> </children> </node> </ontology> The word 'mouse' appears at two different places in the toy ontology. What does this mean? What specific problems does it raise when the ontology is used? How could such problems be solved? (just provide a sketch of explanation.)
In ontologies designed to serve natural language processing (NLP) and natural language understanding (NLU) systems, ontology concepts are usually connected and symbolized by terms. This kind of connection represents a linguistic realization. Terms are words or a combination of words (multi-word units), in different languages, used to describe in natural language an element from reality, and hence connected to that formal ontology concept that frames this element in reality. The lexicon, the collection of terms and their inflections assigned to the concepts and relationships in an ontology, forms the ‘ontology interface to natural language’, the channel through which the ontology can be accessed from a natural language input.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Why is natural language processing difficult? Select all that apply. You will get a penalty for wrong answers.
Missing punctuation and the use of non-standard words can often hinder standard natural language processing tools such as part-of-speech tagging and parsing. Techniques to both learn from the noisy data and then to be able to process the noisy data are only now being developed.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Why is natural language processing difficult? Select all that apply. You will get a penalty for wrong answers.
Most of the more successful systems use lexical statistics (that is, they consider the identities of the words involved, as well as their part of speech). However, such systems are vulnerable to overfitting and require some kind of smoothing to be effective. Parsing algorithms for natural language cannot rely on the grammar having 'nice' properties as with manually designed grammars for programming languages. As mentioned earlier, some grammar formalisms are very difficult to parse computationally; in general, even if the desired structure is not context-free, some kind of context-free approximation to the grammar is used to perform a first pass.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
You have been publishing a daily column for the Gazette over the last few years and have recently reached a milestone --- your 1000th column! Realizing you'd like to go skiing more often, you decide it might be easier to automate your job by training a story generation system on the columns you've already written. Then, whenever your editor pitches you a title for a column topic, you'll just be able to give the title to your story generation system, produce the text body of the column, and publish it to the website! You initialize your model with a vocabulary $V$ with $|V|$ tokens. Given a vector of scores $S = [s_1, \ldots, s_i, \ldots, s_{|V|}]$ output by your model for each token in your vocabulary, write out the softmax function to convert score $s_1$ to a probability mass $P(s_1)$
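The softmax in question maps each score to \(P(s_i) = e^{s_i} / \sum_{j=1}^{|V|} e^{s_j}\). A numerically stable sketch is shown below; the max-subtraction trick is an implementation detail, not part of the definition.

```python
import numpy as np

def softmax(scores):
    """P(s_i) = exp(s_i) / sum_j exp(s_j), computed in a numerically stable way."""
    s = np.asarray(scores, dtype=float)
    z = s - s.max()          # subtracting the max leaves the result unchanged
    e = np.exp(z)
    return e / e.sum()

S = [2.0, 1.0, 0.1]
print(softmax(S))            # probabilities summing to 1; P(s_1) is the first entry
```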
I had just read an article on writing adventures, and I thought about doing my own article on adventure writing. I did start on the article, and one of the examples of how varied puzzles can be is a mathematical adventure where the player has to "use a probability function to cross a field of improbability to get to a vortex." Sadly the article was never finished, although remnants of it can be found in the ADV.DOC file.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
You have been publishing a daily column for the Gazette over the last few years and have recently reached a milestone --- your 1000th column! Realizing you'd like to go skiing more often, you decide it might be easier to automate your job by training a story generation system on the columns you've already written. Then, whenever your editor pitches you a title for a column topic, you'll just be able to give the title to your story generation system, produce the text body of the column, and publish it to the website! You initialize your model with a vocabulary $V$ with $|V|$ tokens. Given a vector of scores $S = [s_1, \ldots, s_i, \ldots, s_{|V|}]$ output by your model for each token in your vocabulary, write out the softmax function to convert score $s_1$ to a probability mass $P(s_1)$
An individual or small business might have this publishing process: brainstorm content ideas to publish, where to publish, and when to publish; write each piece of content based on the publication schedule; edit each piece of content; publish each piece of content. A larger group might have this publishing process: brainstorm content ideas to publish, where to publish, and when to publish; include backup content items for each piece of content; include dates to determine whether to delay or kill each content item (for example, if a writer becomes ill or an interview subject is unavailable); assign each piece of content based on the publication schedule; write each piece of content; review the first draft of each piece of content; give a "go" or "no go" decision based on the first-draft edit and other criteria (then adjust the publishing schedule as needed); if you go, finish writing each piece of content and submit draft content to the layout team, so they can plan their work; perform final edit, copy edit, fact checking, and rewrites as needed; submit the piece of content for review by the legal team; make changes if or as needed based on legal input; submit the piece of content formally to the layout team for their creation of artwork to be included with the published content; post content on a development or test server and make final changes if needed; publish content on the production server or other media. Whether the publishing process is simple or complex, the movement is forward and iterative. Publishers encounter and cross a number of hurdles before a piece of content appears in print, on a website or blog, or in a social media outlet like Twitter or Facebook. The details included and tracked in an editorial calendar depend upon the steps involved in publishing content for a publication, as well as what is useful to track. Too little or too much data makes editorial calendars difficult to maintain and use. Some amount of tweaking of editorial calendar elements, while using the calendar to publish content, is required before they can be truly useful.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In an automated email router of a company, we want to make the distinction between three kinds of emails: technical (about computers), financial, and the rest ('irrelevant'). For this we plan to use a Naive Bayes approach. What is the main assumption made by Naive Bayes classifiers? Why is it 'Naive'?
Naive Bayes classifiers are a popular statistical technique of e-mail filtering. They typically use bag-of-words features to identify email spam, an approach commonly used in text classification. Naive Bayes classifiers work by correlating the use of tokens (typically words, or sometimes other things), with spam and non-spam e-mails and then using Bayes' theorem to calculate a probability that an email is or is not spam. Naive Bayes spam filtering is a baseline technique for dealing with spam that can tailor itself to the email needs of individual users and give low false positive spam detection rates that are generally acceptable to users. It is one of the oldest ways of doing spam filtering, with roots in the 1990s.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In an automated email router of a company, we want to make the distinction between three kinds of emails: technical (about computers), financial, and the rest ('irrelevant'). For this we plan to use a Naive Bayes approach. What is the main assumption made by Naive Bayes classifiers? Why is it 'Naive'?
In statistics, naive Bayes classifiers are a family of simple "probabilistic classifiers" based on applying Bayes' theorem with strong (naive) independence assumptions between the features (see Bayes classifier). They are among the simplest Bayesian network models, but coupled with kernel density estimation, they can achieve high accuracy levels. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem. Maximum-likelihood training can be done by evaluating a closed-form expression, which takes linear time, rather than by expensive iterative approximation as used for many other types of classifiers. In the statistics literature, naive Bayes models are known under a variety of names, including simple Bayes and independence Bayes. All these names reference the use of Bayes' theorem in the classifier's decision rule, but naive Bayes is not (necessarily) a Bayesian method.
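Written out, the 'naive' assumption is that the features (here, the words \(w_1, \dots, w_n\) of an email) are conditionally independent given the class \(C\), so the posterior factorizes as

\[ P(C \mid w_1, \dots, w_n) \propto P(C)\, \prod_{i=1}^{n} P(w_i \mid C), \]

which is what makes the number of parameters linear in the number of features, as noted above.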
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the following CFG \(\text{S} \rightarrow \text{NP VP PNP}\) \(\text{NP} \rightarrow \text{Det N}\) \(\text{NP} \rightarrow \text{Det Adj N}\) \(\text{VP} \rightarrow \text{V}\) \(\text{VP} \rightarrow \text{Aux Ving}\) \(\text{VP} \rightarrow \text{VP NP}\) \(\text{VP} \rightarrow \text{VP PNP}\) \(\text{PNP} \rightarrow \text{Prep NP}\) and the following lexicon: the:Det, red:Adj, cat:N, is:Aux, meowing:Ving, on:Prep, roof:N. The next four questions ask you for the content of a given cell of the chart used by the CYK algorithm (used here as a recognizer) for the input sentence: the red cat is meowing on the roof. Simply answer "empty" if the corresponding cell is empty and use a comma to separate your answers when the cell contains several objects. What is the content of the cell at row 3 column 6 (indexed as in the lectures)?
This is an example grammar: S ⟶ NP VP; VP ⟶ VP PP; VP ⟶ V NP; VP ⟶ eats; PP ⟶ P NP; NP ⟶ Det N; NP ⟶ she; V ⟶ eats; P ⟶ with; N ⟶ fish; N ⟶ fork; Det ⟶ a. Now the sentence she eats a fish with a fork is analyzed using the CYK algorithm. In the following table, in P[i,j,k], i is the number of the row (starting at the bottom at 1), and j is the number of the column (starting at the left at 1). For readability, the CYK table for P is represented here as a 2-dimensional matrix M containing a set of non-terminal symbols, such that Rk is in M[i,j] if, and only if, P[i,j,k]. In the above example, since a start symbol S is in M[7,1], the sentence can be generated by the grammar.
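For readers who want to reproduce such a chart programmatically, here is a minimal CYK recognizer sketch for the example grammar above. The encoding of the grammar as Python dictionaries is ours, not part of the original passage, and the chart is indexed by start position and span length rather than by row and column.

```python
from itertools import product

# Grammar in CNF: binary rules A -> B C and lexical rules A -> word.
binary = {
    ("NP", "VP"): {"S"},
    ("VP", "PP"): {"VP"},
    ("V", "NP"): {"VP"},
    ("P", "NP"): {"PP"},
    ("Det", "N"): {"NP"},
}
lexical = {
    "she": {"NP"},
    "eats": {"VP", "V"},
    "with": {"P"},
    "a": {"Det"},
    "fish": {"N"},
    "fork": {"N"},
}

def cyk_recognize(words, start="S"):
    n = len(words)
    # chart[i][l] = set of non-terminals deriving words[i:i+l] (l = span length)
    chart = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, w in enumerate(words):
        chart[i][1] = set(lexical.get(w, ()))
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            for split in range(1, length):
                for B, C in product(chart[i][split], chart[i + split][length - split]):
                    chart[i][length] |= binary.get((B, C), set())
    return start in chart[0][n]

print(cyk_recognize("she eats a fish with a fork".split()))  # True
```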
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the following CFG \(\text{S} \rightarrow \text{NP VP PNP}\) \(\text{NP} \rightarrow \text{Det N}\) \(\text{NP} \rightarrow \text{Det Adj N}\) \(\text{VP} \rightarrow \text{V}\) \(\text{VP} \rightarrow \text{Aux Ving}\) \(\text{VP} \rightarrow \text{VP NP}\) \(\text{VP} \rightarrow \text{VP PNP}\) \(\text{PNP} \rightarrow \text{Prep NP}\) and the following lexicon: the:Det, red:Adj, cat:N, is:Aux, meowing:Ving, on:Prep, roof:N. The next four questions ask you for the content of a given cell of the chart used by the CYK algorithm (used here as a recognizer) for the input sentence: the red cat is meowing on the roof. Simply answer "empty" if the corresponding cell is empty and use a comma to separate your answers when the cell contains several objects. What is the content of the cell at row 3 column 6 (indexed as in the lectures)?
Input: Received word \(y = (y_0, \dots, y_{2^n-1})\). For each \(i \in \{1, \dots, n\}\): pick \(j \in \{0, \dots, 2^n-1\}\) uniformly at random; pick \(k \in \{0, \dots, 2^n-1\}\) such that \(j + k = e_i\), where \(e_i\) is the \(i\)-th standard basis vector and \(j + k\) is the bitwise xor of \(j\) and \(k\); set \(x_i \leftarrow y_j + y_k\). Output: Message \(x = (x_1, \dots, x_n)\).
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In an automated email router of a company, we want to make the distinction between three kinds of emails: technical (about computers), financial, and the rest ('irrelevant'). For this we plan to use a Naive Bayes approach. What is the main assumption made by Naive Bayes classifiers? Why is it 'Naive'? We will consider the following three messages: The Dow industrials tumbled 120.54 to 10924.74, hurt by GM's sales forecast and two economic reports. Oil rose to $71.92. BitTorrent Inc. is boosting its network capacity as it prepares to become a centralized hub for legal video content. In May, BitTorrent announced a deal with Warner Brothers to distribute its TV and movie content via the BT platform. It has now lined up IP transit for streaming videos at a few gigabits per second. Intel will sell its XScale PXAxxx applications processor and 3G baseband processor businesses to Marvell for $600 million, plus existing liabilities. The deal could make Marvell the top supplier of 3G and later smartphone processors, and enable Intel to focus on its core x86 and wireless LAN chipset businesses, the companies say. Suppose we have collected the following statistics about the word frequencies within the corresponding classes, where '0.00...' stands for some very small value: \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & technical & financial & irrelevant & & technical & financial & irrelevant \\ \hline $\$<$ number $>$ & 0.01 & 0.07 & 0.05 & deal & 0.01 & 0.02 & $0.00 \ldots$ \\ \hline Dow & $0.00 \ldots$ & 0.08 & $0.00 \ldots$ & forecast & $0.00 \ldots$ & 0.03 & 0.01 \\ \hline GM & $0.00 \ldots$ & 0.03 & $0.00 \ldots$ & gigabit & 0.03 & $0.00 \ldots$ & $0.00 \ldots$ \\ \hline IP & 0.03 & $0.00 \ldots$ & $0.00 \ldots$ & hub & 0.06 & $0.00 \ldots$ & 0.01 \\ \hline Intel & 0.02 & 0.02 & $0.00 \ldots$ & network & 0.04 & 0.01 & $0.00 \ldots$ \\ \hline business & 0.01 & 0.07 & 0.04 & processor & 0.07 & 0.01 & $0.00 \ldots$ \\ \hline capacity & 0.01 & $0.00 \ldots$ & $0.00 \ldots$ & smartphone & 0.04 & 0.04 & 0.01 \\ \hline chipset & 0.04 & 0.01 & $0.00 \ldots$ & wireless & 0.02 & 0.01 & $0.00 \ldots$ \\ \hline company & 0.01 & 0.04 & 0.05 & & & & \\ \hline \end{tabular} \end{center} We now want to specifically focus on the processing of compounds such as 'network capacity' in the second text. Outline how you would build a pre-processor for compound words.
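One possible ingredient of such a pre-processor is a collocation detector: adjacent word pairs that co-occur much more often than chance can be merged into a single token before classification. The sketch below uses pointwise mutual information (PMI); the threshold, the minimum count and the underscore-joining convention are all illustrative assumptions, and a real pre-processor would typically also use a lexicon of known compounds and part-of-speech filters (e.g. keeping only noun-noun pairs).

```python
import math
from collections import Counter

def pmi_compounds(tokenized_sentences, min_count=2, threshold=3.0):
    """Flag adjacent word pairs whose pointwise mutual information is high."""
    unigrams, bigrams, total = Counter(), Counter(), 0
    for sent in tokenized_sentences:
        unigrams.update(sent)
        bigrams.update(zip(sent, sent[1:]))
        total += len(sent)
    compounds = set()
    for (w1, w2), c in bigrams.items():
        if c < min_count:
            continue
        pmi = math.log((c / total) /
                       ((unigrams[w1] / total) * (unigrams[w2] / total)))
        if pmi > threshold:
            compounds.add((w1, w2))
    return compounds

def merge_compounds(sent, compounds):
    """Rewrite e.g. ['network', 'capacity'] as the single token 'network_capacity'."""
    out, i = [], 0
    while i < len(sent):
        if i + 1 < len(sent) and (sent[i], sent[i + 1]) in compounds:
            out.append(sent[i] + "_" + sent[i + 1])
            i += 2
        else:
            out.append(sent[i])
            i += 1
    return out
```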
Naive Bayes classifiers are a popular statistical technique of e-mail filtering. They typically use bag-of-words features to identify email spam, an approach commonly used in text classification. Naive Bayes classifiers work by correlating the use of tokens (typically words, or sometimes other things), with spam and non-spam e-mails and then using Bayes' theorem to calculate a probability that an email is or is not spam. Naive Bayes spam filtering is a baseline technique for dealing with spam that can tailor itself to the email needs of individual users and give low false positive spam detection rates that are generally acceptable to users. It is one of the oldest ways of doing spam filtering, with roots in the 1990s.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In an automated email router of a company, we want to make the distinction between three kinds of emails: technical (about computers), financial, and the rest ('irrelevant'). For this we plan to use a Naive Bayes approach. What is the main assumption made by Naive Bayes classifiers? Why is it 'Naive'? We will consider the following three messages: The Dow industrials tumbled 120.54 to 10924.74, hurt by GM's sales forecast and two economic reports. Oil rose to $71.92. BitTorrent Inc. is boosting its network capacity as it prepares to become a centralized hub for legal video content. In May, BitTorrent announced a deal with Warner Brothers to distribute its TV and movie content via the BT platform. It has now lined up IP transit for streaming videos at a few gigabits per second. Intel will sell its XScale PXAxxx applications processor and 3G baseband processor businesses to Marvell for $600 million, plus existing liabilities. The deal could make Marvell the top supplier of 3G and later smartphone processors, and enable Intel to focus on its core x86 and wireless LAN chipset businesses, the companies say. Suppose we have collected the following statistics about the word frequencies within the corresponding classes, where '0.00...' stands for some very small value: \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & technical & financial & irrelevant & & technical & financial & irrelevant \\ \hline $\$<$ number $>$ & 0.01 & 0.07 & 0.05 & deal & 0.01 & 0.02 & $0.00 \ldots$ \\ \hline Dow & $0.00 \ldots$ & 0.08 & $0.00 \ldots$ & forecast & $0.00 \ldots$ & 0.03 & 0.01 \\ \hline GM & $0.00 \ldots$ & 0.03 & $0.00 \ldots$ & gigabit & 0.03 & $0.00 \ldots$ & $0.00 \ldots$ \\ \hline IP & 0.03 & $0.00 \ldots$ & $0.00 \ldots$ & hub & 0.06 & $0.00 \ldots$ & 0.01 \\ \hline Intel & 0.02 & 0.02 & $0.00 \ldots$ & network & 0.04 & 0.01 & $0.00 \ldots$ \\ \hline business & 0.01 & 0.07 & 0.04 & processor & 0.07 & 0.01 & $0.00 \ldots$ \\ \hline capacity & 0.01 & $0.00 \ldots$ & $0.00 \ldots$ & smartphone & 0.04 & 0.04 & 0.01 \\ \hline chipset & 0.04 & 0.01 & $0.00 \ldots$ & wireless & 0.02 & 0.01 & $0.00 \ldots$ \\ \hline company & 0.01 & 0.04 & 0.05 & & & & \\ \hline \end{tabular} \end{center} We now want to specifically focus on the processing of compounds such as 'network capacity' in the second text. Outline how you would build a pre-processor for compound words.
In statistics, naive Bayes classifiers are a family of simple "probabilistic classifiers" based on applying Bayes' theorem with strong (naive) independence assumptions between the features (see Bayes classifier). They are among the simplest Bayesian network models, but coupled with kernel density estimation, they can achieve high accuracy levels. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem. Maximum-likelihood training can be done by evaluating a closed-form expression, which takes linear time, rather than by expensive iterative approximation as used for many other types of classifiers. In the statistics literature, naive Bayes models are known under a variety of names, including simple Bayes and independence Bayes. All these names reference the use of Bayes' theorem in the classifier's decision rule, but naive Bayes is not (necessarily) a Bayesian method.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the (toy) grammar $G$ consisting of the following rules: R1: S --> NP VP R2: NP --> NN R3: NP --> Det NN R4: NN --> N R5: NN --> NN NN R6: NN --> NN PNP R7: PNP --> Prep NP R8: VP --> V R9: VP --> Adv V Precisely define the type of grammar $G$ corresponds to (for that, consider at least the following aspects: dependency-based vs. constituency-based, position in the Chomsky hierarchy, and CNF). Justify your answer for each of the aspects you will be mentioning.
Chomsky, N. (1959). "On certain formal properties of grammars". Information and Control. 2 (2): 137–167. doi:10.1016/S0019-9958(59)90362-6.Description: This article introduced what is now known as the Chomsky hierarchy, a containment hierarchy of classes of formal grammars that generate formal languages.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the (toy) grammar $G$ consisting of the following rules: R1: S --> NP VP R2: NP --> NN R3: NP --> Det NN R4: NN --> N R5: NN --> NN NN R6: NN --> NN PNP R7: PNP --> Prep NP R8: VP --> V R9: VP --> Adv V Precisely define the type of grammar $G$ corresponds to (for that, consider at least the following aspects: dependency-based vs. constituency-based, position in the Chomsky hierarchy, and CNF). Justify your answer for each of the aspects you will be mentioning.
Generative grammars can be described and compared with the aid of the Chomsky hierarchy (proposed by Chomsky in the 1950s). This sets out a series of types of formal grammars with increasing expressive power. Among the simplest types are the regular grammars (type 3); Chomsky argues that these are not adequate as models for human language, because of the allowance of the center-embedding of strings within strings, in all natural human languages. At a higher level of complexity are the context-free grammars (type 2).
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The objective of this question is to illustrate the use of a lexical semantics resource to compute lexical cohesion. Consider the following toy ontology providing a semantic structuring for a (small) set of nouns: <ontology> <node text='all'> <children> <node text='animate entities'> <children> <node text='human beings'> <children> <node text='man'></node> <node text='woman'></node> <node text='child'></node> </children> </node> <node text='animals'> <children> <node text='cat'></node> <node text='dog'></node> <node text='mouse'></node> </children> </node> </children> </node> <node text='non animate entities'> <children> <node text='abstract entities'> <children> <node text='freedom'></node> <node text='happiness'></node> </children> </node> <node text='concrete entities'> <children> <node text='table'></node> <node text='pen'></node> <node text='mouse'></node> </children> </node> </children> </node> </children> </node> </ontology> Give some examples of NLP tasks for which lexical cohesion might be useful. Explain why.
Recent advances in methods of lexical semantic relatedness - a survey. Natural Language Engineering 19 (4), 411–479, Cambridge University Press. Book: S. Harispe, S. Ranwez, S. Janaqi, J. Montmain. 2015. Semantic Similarity from Natural Language and Ontology Analysis, Morgan & Claypool Publishers.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The objective of this question is to illustrate the use of a lexical semantics resource to compute lexical cohesion. Consider the following toy ontology providing a semantic structuring for a (small) set of nouns: <ontology> <node text='all'> <children> <node text='animate entities'> <children> <node text='human beings'> <children> <node text='man'></node> <node text='woman'></node> <node text='child'></node> </children> </node> <node text='animals'> <children> <node text='cat'></node> <node text='dog'></node> <node text='mouse'></node> </children> </node> </children> </node> <node text='non animate entities'> <children> <node text='abstract entities'> <children> <node text='freedom'></node> <node text='happiness'></node> </children> </node> <node text='concrete entities'> <children> <node text='table'></node> <node text='pen'></node> <node text='mouse'></node> </children> </node> </children> </node> </children> </node> </ontology> Give some examples of NLP tasks for which lexical cohesion might be useful. Explain why.
Cohesion is analysed in the context of both lexical and grammatical as well as intonational aspects with reference to lexical chains and, in the speech register, tonality, tonicity, and tone. The lexical aspect focuses on sense relations and lexical repetitions, while the grammatical aspect looks at repetition of meaning shown through reference, substitution and ellipsis, as well as the role of linking adverbials. Systemic functional grammar deals with all of these areas of meaning equally within the grammatical system itself.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Select which statements are true regarding SCFGs. A penalty will be applied for any incorrect answers.
In languages that allow procedural parameters, the scoping rules are usually defined in such a way that procedural parameters are executed in their native scope. More precisely, suppose that the function actf is passed as argument to P, as its procedural parameter f; and f is then called from inside the body of P. While actf is being executed, it sees the environment of its definition. The implementation of these scoping rules is not trivial. By the time that actf is finally executed, the activation records where its environment variables live may be arbitrarily deep in the stack. This is the so-called downwards funarg problem.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Select which statements are true regarding SCFGs. A penalty will be applied for any incorrect answers.
In computer languages it is expected that any truth-valued expression be permitted as the selection condition rather than restricting it to be a simple comparison. In SQL, selections are performed by using WHERE definitions in SELECT, UPDATE, and DELETE statements, but note that the selection condition can result in any of three truth values (true, false and unknown) instead of the usual two. In SQL, general selections are performed by using WHERE definitions with AND, OR, or NOT operands in SELECT, UPDATE, and DELETE statements.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider a 3-gram language model. Select all possible ways we can compute the maximum likelihood of the word sequence: "time flies like an arrow". You will get a penalty for wrong ticks.
Let \(c(w, w')\) be the number of occurrences of the word \(w\) followed by the word \(w'\) in the corpus. The equation for bigram probabilities is as follows: \( p_{KN}(w_i \mid w_{i-1}) = \frac{\max(c(w_{i-1}, w_i) - \delta,\, 0)}{\sum_{w'} c(w_{i-1}, w')} + \lambda_{w_{i-1}}\, p_{KN}(w_i) \) where the unigram probability \(p_{KN}(w_i)\) depends on how likely it is to see the word \(w_i\) in an unfamiliar context, which is estimated as the number of times it appears after any other word divided by the number of distinct pairs of consecutive words in the corpus: \( p_{KN}(w_i) = \frac{|\{w' : 0 < c(w', w_i)\}|}{|\{(w', w'') : 0 < c(w', w'')\}|} \)
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider a 3-gram language model. Select all possible ways we can compute the maximum likelihood of the word sequence: "time flies like an arrow". You will get a penalty for wrong ticks.
Let \(c(w, w')\) be the number of occurrences of the word \(w\) followed by the word \(w'\) in the corpus. The equation for bigram probabilities is as follows: \( p_{KN}(w_i \mid w_{i-1}) = \frac{\max(c(w_{i-1}, w_i) - \delta,\, 0)}{\sum_{w'} c(w_{i-1}, w')} + \lambda_{w_{i-1}}\, p_{KN}(w_i) \) where the unigram probability \(p_{KN}(w_i)\) depends on how likely it is to see the word \(w_i\) in an unfamiliar context, which is estimated as the number of times it appears after any other word divided by the number of distinct pairs of consecutive words in the corpus: \( p_{KN}(w_i) = \frac{|\{w' : 0 < c(w', w_i)\}|}{|\{(w', w'') : 0 < c(w', w'')\}|} \)
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Following are token counts that appear in 3 documents (D1, D2, and D3):
D1 – tablet: 7; memory: 5; app: 8; sluggish: 7
D2 – memory: 5; app: 3
D3 – tablet: 3; sluggish: 3
Based on the cosine similarity, which 2 documents are the most similar?
Cosine similarity is a widely used measure to compare the similarity between two pieces of text. It calculates the cosine of the angle between two document vectors in a high-dimensional space. Cosine similarity ranges between -1 and 1, where a value closer to 1 indicates higher similarity, and a value closer to -1 indicates lower similarity. By visualizing two lines originating from the origin and extending to the respective points of interest, and then measuring the angle between these lines, one can determine the similarity between the associated points. Cosine similarity is less affected by document length, so it may be better at producing medoids that are representative of the content of a cluster instead of the length.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Following are token counts that appear in 3 documents (D1, D2, and D3):
D1 – tablet: 7; memory: 5; app: 8; sluggish: 7
D2 – memory: 5; app: 3
D3 – tablet: 3; sluggish: 3
Based on the cosine similarity, which 2 documents are the most similar?
Cosine similarity can be seen as a method of normalizing document length during comparison. In the case of information retrieval, the cosine similarity of two documents will range from \(0\) to \(1\), since the term frequencies cannot be negative. This remains true when using TF-IDF weights.
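As an illustration of how the comparison above could be carried out, the sketch below builds raw count vectors for D1, D2 and D3 (the vocabulary order is chosen arbitrarily here) and compares them with plain cosine similarity; no TF-IDF weighting is applied.

```python
import math

def cosine(u, v):
    """Cosine of the angle between two count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Vocabulary order: tablet, memory, app, sluggish
d1 = [7, 5, 8, 7]
d2 = [0, 5, 3, 0]
d3 = [3, 0, 0, 3]

# Approximate values: cos(d1,d2) ~ 0.61, cos(d1,d3) ~ 0.72, cos(d2,d3) = 0.
print(round(cosine(d1, d2), 3), round(cosine(d1, d3), 3), round(cosine(d2, d3), 3))
```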
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Select what is true about the Baum-Welch algorithm. A penalty will be applied for any incorrect answers.
In electrical engineering, statistical computing and bioinformatics, the Baum–Welch algorithm is a special case of the expectation–maximization algorithm used to find the unknown parameters of a hidden Markov model (HMM). It makes use of the forward-backward algorithm to compute the statistics for the expectation step.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Select what is true about the Baum-Welch algorithm. A penalty will be applied for any incorrect answers.
The Baum–Welch algorithm is often used to estimate the parameters of HMMs in deciphering hidden or noisy information and consequently is often used in cryptanalysis. In data security an observer would like to extract information from a data stream without knowing all the parameters of the transmission. This can involve reverse engineering a channel encoder.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the following toy corpus:
the cat cut the hat
How many different bigrams of characters (including whitespace) do you have in that corpus?
The text identifier itself consists of multiple constituent parts. Sequences of whitespace are treated as equivalent to a single space.
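Setting the passage above aside, the count asked for in the question can be checked mechanically. This minimal sketch treats the corpus as the literal 19-character string, whitespace included, and collects the distinct character bigrams.

```python
corpus = "the cat cut the hat"
# All overlapping character pairs, then deduplicated.
bigrams = {corpus[i:i + 2] for i in range(len(corpus) - 1)}
print(sorted(bigrams))
print(len(bigrams))   # number of different character bigrams in the corpus
```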
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the following toy corpus:
the cat cut the hat
How many different bigrams of characters (including whitespace) do you have in that corpus?
For example, the following two nine character long strings, FAREMVIEL and FARMVILLE, have 8 matching characters. 'F', 'A' and 'R' are in the same position in both strings.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Your aim is to evaluate a Tweet analysis system, the purpose of which is to detect whether a tweet is offensive. For each Tweet processed, such a system outputs one of the following classes: "hateful", "offensive" and "neutral". To perform your evaluation, you collect a large set of Tweets and have it annotated by two human annotators. This corpus contains 1% of "hateful" and 4% of "offensive" Tweets. What metrics do you think are appropriate to evaluate such a system? (Penalty for wrong ticks.)
Even though short text strings might be a problem, sentiment analysis within microblogging has shown that Twitter can be seen as a valid online indicator of political sentiment. Tweets' political sentiment demonstrates close correspondence to parties' and politicians' political positions, indicating that the content of Twitter messages plausibly reflects the offline political landscape. Furthermore, sentiment analysis on Twitter has also been shown to capture the public mood behind human reproduction cycles globally, as well as other problems of public-health relevance such as adverse drug reactions. While sentiment analysis has been popular for domains where authors express their opinion rather explicitly ("the movie is awesome"), such as social media and product reviews, only recently robust methods were devised for other domains where sentiment is strongly implicit or indirect. For example, in news articles - mostly due to the expected journalistic objectivity - journalists often describe actions or events rather than directly stating the polarity of a piece of information. Earlier approaches using dictionaries or shallow machine learning features were unable to catch the "meaning between the lines", but recently researchers have proposed a deep learning based approach and dataset that is able to analyze sentiment in news articles.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Your aim is to evaluate a Tweet analysis system, the purpose of which is to detect whether a tweet is offensive. For each Tweet processed, such a system outputs one of the following classes: "hateful", "offensive" and "neutral". To perform your evaluation, you collect a large set of Tweets and have it annotated by two human annotators. This corpus contains 1% of "hateful" and 4% of "offensive" Tweets. What metrics do you think are appropriate to evaluate such a system? (Penalty for wrong ticks.)
Their work explains in detail an attempt to detect inauthentic texts and identify pernicious problems of inauthentic texts in cyberspace. The site has a means of submitting text that assesses, based on supervised learning, whether a corpus is inauthentic or not. Many users have submitted incorrect types of data and have correspondingly commented on the scores. This application is meant for a specific kind of data; therefore, submitting, say, an email, will not return a meaningful score.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
We aim at tagging English texts with 'Part-of-Speech' (PoS) tags. For this, we consider using the following model (partial picture): ...some picture... Explanation of (some) tags: \begin{center} \begin{tabular}{l|l|l|l} Tag & English expl. & Expl. française & Example(s) \\ \hline JJ & Adjective & adjectif & yellow \\ NN & Noun, Singular & nom commun singulier & cat \\ NNS & Noun, Plural & nom commun pluriel & cats \\ PRP\$ & Possessive Pronoun & pronom possessif & my, one's \\ RB & Adverb & adverbe & never, quickly \\ VBD & Verb, Past Tense & verbe au passé & ate \\ VBN & Verb, Past Participle & participe passé & eaten \\ VBZ & Verb, Present 3P Sing & verbe au présent, 3e pers. sing. & eats \\ WP\$ & Possessive wh- & pronom relatif (poss.) & whose \\ \end{tabular} \end{center} What kind of model (of PoS tagger) is it? What assumption(s) does it rely on?
In corpus linguistics, part-of-speech tagging (POS tagging or PoS tagging or POST), also called grammatical tagging, is the process of marking up a word in a text (corpus) as corresponding to a particular part of speech, based on both its definition and its context. A simplified form of this is commonly taught to school-age children, in the identification of words as nouns, verbs, adjectives, adverbs, etc. Once performed by hand, POS tagging is now done in the context of computational linguistics, using algorithms which associate discrete terms, as well as hidden parts of speech, by a set of descriptive tags. POS-tagging algorithms fall into two distinctive groups: rule-based and stochastic. E. Brill's tagger, one of the first and most widely used English POS-taggers, employs rule-based algorithms.
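If the partial picture in the question corresponds to a (first-order) hidden Markov model tagger, the joint probability of a tag sequence \(t_1 \dots t_n\) and word sequence \(w_1 \dots w_n\) factorizes as below; this is the standard HMM formulation, stated here only as a reminder of the limited-horizon (each tag depends only on the previous tag) and output-independence (each word depends only on its own tag) assumptions.

\[ P(t_1, \dots, t_n, w_1, \dots, w_n) = \prod_{i=1}^{n} P(t_i \mid t_{i-1})\, P(w_i \mid t_i), \]

with \(t_0\) a conventional start tag.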
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
We aim at tagging English texts with 'Part-of-Speech' (PoS) tags. For this, we consider using the following model (partial picture): ...some picture... Explanation of (some) tags: \begin{center} \begin{tabular}{l|l|l|l} Tag & English expl. & Expl. française & Example(s) \\ \hline JJ & Adjective & adjectif & yellow \\ NN & Noun, Singular & nom commun singulier & cat \\ NNS & Noun, Plural & nom commun pluriel & cats \\ PRP\$ & Possessive Pronoun & pronom possessif & my, one's \\ RB & Adverb & adverbe & never, quickly \\ VBD & Verb, Past Tense & verbe au passé & ate \\ VBN & Verb, Past Participle & participe passé & eaten \\ VBZ & Verb, Present 3P Sing & verbe au présent, 3e pers. sing. & eats \\ WP\$ & Possessive wh- & pronom relatif (poss.) & whose \\ \end{tabular} \end{center} What kind of model (of PoS tagger) is it? What assumption(s) does it rely on?
In corpus linguistics, part-of-speech tagging (POS tagging or PoS tagging or POST), also called grammatical tagging, is the process of marking up a word in a text (corpus) as corresponding to a particular part of speech, based on both its definition and its context. A simplified form of this is commonly taught to school-age children, in the identification of words as nouns, verbs, adjectives, adverbs, etc. Once performed by hand, POS tagging is now done in the context of computational linguistics, using algorithms which associate discrete terms, as well as hidden parts of speech, by a set of descriptive tags. POS-tagging algorithms fall into two distinctive groups: rule-based and stochastic. E. Brill's tagger, one of the first and most widely used English POS-taggers, employs rule-based algorithms.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Using a 4-gram character model, comparing "banana" and "ananas"...
These models compare the letters of words rather than their phonetics. Dunn et al. studied 125 typological characters across 16 Austronesian and 15 Papuan languages. They compared their results to an MP tree and one constructed by traditional analysis. Significant differences were found. Similarly Wichmann and Saunders used 96 characters to study 63 American languages.
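Independently of the passage above, the question's comparison can be made concrete by listing the character 4-grams of the two words. This small sketch assumes no padding symbols are added at word boundaries; the exact convention depends on the course.

```python
def char_ngrams(word, n=4):
    """All overlapping character n-grams of a word, left to right."""
    return [word[i:i + n] for i in range(len(word) - n + 1)]

b = char_ngrams("banana")   # ['bana', 'anan', 'nana']
a = char_ngrams("ananas")   # ['anan', 'nana', 'anas']
print(set(b) & set(a))      # shared 4-grams: {'anan', 'nana'}
```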
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Using a 4-gram character model, comparing "banana" and "ananas"...
The results depended on the data set used. It was found that weighting the characters was important, which requires linguistic judgement. Saunders (2005) compared NJ, MP, GA and Neighbor-Net on a combination of lexical and typological data.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
A query \(q\) has been submitted to two distinct Information Retrieval engines operating on the same document collection containing 1'000 documents, with 50 documents being truly relevant for \(q\). The following result lists have been produced by the two IR engines, \(S_1\) and \(S_2\) respectively: \(S_1\text{:}\) \(d_1\) \(d_2\text{ (*)}\) \(d_3\text{ (*)}\) \(d_4\) \(d_5\text{ (*)}\) \(S_2\text{:}\) \(d^\prime_1\text{ (*)}\) \(d^\prime_2\text{ (*)}\) \(d^\prime_3\) \(d^\prime_4\) \(d^\prime_5\) In these result lists, the stars \(\text{(*)}\) identify the truly relevant documents. By convention, we consider that any non retrieved document has been retrieved at rank 6. If Average Precision is used as evaluation metric, which of the two IR engines is performing better for the query \(q\)?
The mathematics of universal IR evaluation is a fairly new subject since the relevance metrics P,R,F,M were not analyzed collectively until recently (within the past decade). A lot of the theoretical groundwork has already been formulated, but new insights in this area await discovery. For a detailed mathematical analysis, a query in the ScienceDirect database for "universal IR evaluation" retrieves several relevant peer-reviewed papers.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
A query \(q\) has been submitted to two distinct Information Retrieval engines operating on the same document collection containing 1'000 documents, with 50 documents being truly relevant for \(q\). The following result lists have been produced by the two IR engines, \(S_1\) and \(S_2\) respectively: \(S_1\text{:}\) \(d_1\) \(d_2\text{ (*)}\) \(d_3\text{ (*)}\) \(d_4\) \(d_5\text{ (*)}\) \(S_2\text{:}\) \(d^\prime_1\text{ (*)}\) \(d^\prime_2\text{ (*)}\) \(d^\prime_3\) \(d^\prime_4\) \(d^\prime_5\) In these result lists, the stars \(\text{(*)}\) identify the truly relevant documents. By convention, we consider that any non retrieved document has been retrieved at rank 6. If Average Precision is used as evaluation metric, which of the two IR engines is performing better for the query \(q\)?
For modern (web-scale) information retrieval, recall is no longer a meaningful metric, as many queries have thousands of relevant documents, and few users will be interested in reading all of them. Precision at k documents (P@k) is still a useful metric (e.g., P@10 or "Precision at 10" corresponds to the number of relevant results among the top 10 retrieved documents), but fails to take into account the positions of the relevant documents among the top k. Another shortcoming is that on a query with fewer relevant results than k, even a perfect system will have a score less than 1. It is easier to score manually since only the top k results need to be examined to determine if they are relevant or not.
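To make the comparison concrete, a small script can compute Average Precision for both result lists. It follows one common definition (mean of precision at the ranks of the retrieved relevant documents, normalized by the 50 truly relevant documents in the collection); the course's exact convention, including how the "rank 6" clause is applied, may differ.

```python
def average_precision(relevance, total_relevant):
    """relevance: booleans for the ranked list (rank 1 first).

    Sums precision@k over the ranks k of relevant retrieved documents and
    divides by the total number of relevant documents in the collection."""
    hits, ap = 0, 0.0
    for k, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            ap += hits / k
    return ap / total_relevant

s1 = [False, True, True, False, True]   # d2, d3, d5 are relevant
s2 = [True, True, False, False, False]  # d'1, d'2 are relevant
print(average_precision(s1, 50), average_precision(s2, 50))
```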
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
For this question, one or more assertions can be correct. Tick only the correct assertion(s). There will be a penalty for wrong assertions ticked. Using a 3-gram character model, which of the following expressions are equal to \( P(\text{opossum}) \)?
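As background for checking the listed expressions, a 3-gram (trigram) character model conditions each character on the two preceding ones; how the first two characters are handled (padding symbols vs. lower-order terms) depends on the convention assumed by the question. Under one common convention,

\[ P(c_1 \dots c_n) = P(c_1)\, P(c_2 \mid c_1)\, \prod_{i=3}^{n} P(c_i \mid c_{i-2}, c_{i-1}), \]

so for "opossum" this reads \( P(\text{o})\, P(\text{p} \mid \text{o})\, P(\text{o} \mid \text{op})\, P(\text{s} \mid \text{po})\, P(\text{s} \mid \text{os})\, P(\text{u} \mid \text{ss})\, P(\text{m} \mid \text{su}) \).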
/^.*?px/ will match the substring "165px" in "165px 17px" instead of matching "165px 17px". In certain implementations of the BASIC programming language, the ? character serves as a shorthand for the PRINT statement.
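A quick check of the greedy/lazy difference with Python's re module, using the same pattern and input as the example above:

```python
import re

text = "165px 17px"
print(re.match(r"^.*?px", text).group())  # lazy quantifier: matches '165px'
print(re.match(r"^.*px", text).group())   # greedy quantifier: matches '165px 17px'
```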
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
For this question, one or more assertions can be correct. Tick only the correct assertion(s). There will be a penalty for wrong assertions ticked.Using a 3-gram character model, which of the following expressions are equal to \( P(\text{opossum}) \) ?
(Similar to 1.03, 1.16 and 1.17. A very long demonstration was required here.) ✸2.16 (p → q) → (~q → ~p) (If it's true that "If this rose is red then this pig flies" then it's true that "If this pig doesn't fly then this rose isn't red.")
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Using the same set of transformations as in the previous question, what is the final value you get for the edit distance between execution and exceuton, i.e. D(execution, exceuton)?Give your answer as a numerical value. 
To execute O2 after O1, O2 must be transformed against O1 to become: O2' = Delete, whose positional parameter is incremented by one due to the insertion of one character "x" by O1. Executing O2' on "xabc" deletes the correct character "c" and the document becomes "xab". However, if O2 is executed without transformation, it incorrectly deletes character "b" rather than "c". The basic idea of OT is to transform (or adjust) the parameters of an editing operation according to the effects of previously executed concurrent operations so that the transformed operation can achieve the correct effect and maintain document consistency.
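A minimal sketch of that insert-against-delete transformation in Python; the operation representation is illustrative and not taken from any particular OT library:

```python
def transform_delete_against_insert(delete_pos, insert_pos):
    """Shift a concurrent Delete's position if an Insert happened at or before it."""
    return delete_pos + 1 if insert_pos <= delete_pos else delete_pos

doc = "abc"
# O1 = Insert("x") at position 0, O2 = Delete at position 2 (the character "c")
doc = "x" + doc                                   # after O1: "xabc"
o2_prime = transform_delete_against_insert(2, 0)  # transformed position: 3
doc = doc[:o2_prime] + doc[o2_prime + 1:]
print(doc)  # "xab" -- the intended character "c" was deleted
```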
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Using the same set of transformations as in the previous question, what is the final value you get for the edit distance between execution and exceuton, i.e. D(execution, exceuton)?Give your answer as a numerical value. 
For the task of correcting OCR output, merge and split operations have been used, which replace a single character with a pair of characters or vice versa. Other variants of edit distance are obtained by restricting the set of operations. Longest common subsequence (LCS) distance is edit distance with insertion and deletion as the only two edit operations, both at unit cost. Similarly, by only allowing substitutions (again at unit cost), Hamming distance is obtained; this must be restricted to equal-length strings. Jaro–Winkler distance can be obtained from an edit distance where only transpositions are allowed.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Only \( G \) different 4-grams (values) are indeed observed. What is the probability of the others:using “additive smoothing” with a Dirichlet prior with parameter \( (\alpha, \cdots, \alpha) \), of appropriate dimension, where \( \alpha \) is a real-number between 0 and 1?
Additive smoothing is a type of shrinkage estimator, as the resulting estimate will be between the empirical probability (relative frequency) \(x_i / N\) and the uniform probability \(1/d\). Invoking Laplace's rule of succession, some authors have argued that α should be 1 (in which case the term add-one smoothing is also used), though in practice a smaller value is typically chosen. From a Bayesian point of view, this corresponds to the expected value of the posterior distribution, using a symmetric Dirichlet distribution with parameter α as a prior distribution. In the special case where the number of categories is 2, this is equivalent to using a beta distribution as the conjugate prior for the parameters of the binomial distribution.
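A small numerical sketch of additive (add-α) smoothing; the counts and α values are arbitrary:

```python
def additive_smoothing(counts, alpha):
    """Smoothed estimates p_i = (x_i + alpha) / (N + alpha * d)."""
    N, d = sum(counts), len(counts)
    return [(x + alpha) / (N + alpha * d) for x in counts]

counts = [3, 0, 1]                      # observed frequencies over d = 3 categories
print(additive_smoothing(counts, 0.0))  # maximum likelihood: [0.75, 0.0, 0.25]
print(additive_smoothing(counts, 1.0))  # add-one smoothing: [4/7, 1/7, 2/7]
```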
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Only \( G \) different 4-grams (values) are indeed observed. What is the probability of the others:using “additive smoothing” with a Dirichlet prior with parameter \( (\alpha, \cdots, \alpha) \), of appropriate dimension, where \( \alpha \) is a real-number between 0 and 1?
Smoothing bias complicates interval estimation for these models, and the simplest approach turns out to involve a Bayesian approach. Understanding this Bayesian view of smoothing also helps to understand the REML and full Bayes approaches to smoothing parameter estimation. At some level smoothing penalties are imposed because we believe smooth functions to be more probable than wiggly ones, and if that is true then we might as well formalize this notion by placing a prior on model wiggliness. A very simple prior might be \(\pi(\beta) \propto \exp\{-\beta^{T} \sum_j \lambda_j S_j \beta / (2\phi)\}\) (where \(\phi\) is the GLM scale parameter introduced only for later convenience), but we can immediately recognize this as a multivariate normal prior with mean \(0\) and precision matrix \(S_\lambda = \sum_j \lambda_j S_j / \phi\).
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
A major specificity of natural languages is that they are inherently implicit and ambiguous. How should this be taken into account in the NLP perspective? (penalty for wrong ticks)
A characteristic of natural language is that there are many different ways to state what one wants to say: several meanings can be contained in a single text and the same meaning can be expressed by different texts. This variability of semantic expression can be seen as the dual problem of language ambiguity. Together, they result in a many-to-many mapping between language expressions and meanings.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
A major specificity of natural languages is that they are inherently implicit and ambiguous. How should this be taken into account in the NLP perspective? (penalty for wrong ticks)
A characteristic of natural language is that there are many different ways to state what one wants to say: several meanings can be contained in a single text and the same meaning can be expressed by different texts. This variability of semantic expression can be seen as the dual problem of language ambiguity. Together, they result in a many-to-many mapping between language expressions and meanings.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
We aim at tagging English texts with 'Part-of-Speech' (PoS) tags. For this, we consider using the following model (partial picture): ...some picture... Explanation of (some) tags: \begin{center} \begin{tabular}{l|l|l|l} Tag & English expl. & Expl. française & Example(s) \\ \hline JJ & Adjective & adjectif & yellow \\ NN & Noun, Singular & nom commun singulier & cat \\ NNS & Noun, Plural & nom commun pluriel & cats \\ PRP\$ & Possessive Pronoun & pronom possessif & my, one's \\ RB & Adverb & adverbe & never, quickly \\ VBD & Verb, Past Tense & verbe au passé & ate \\ VBN & Verb, Past Participle & participe passé & eaten \\ VBZ & Verb, Present 3P Sing & verbe au présent, 3e pers. sing. & eats \\ WP\$ & Possessive wh- & pronom relatif (poss.) & whose \\ \end{tabular} \end{center} We use the following (part of) lexicon: \begin{center} \begin{tabular}{l|ll|l} adult & JJ & has & VBZ \\ adult & $\mathrm{NN}$ & just & RB \\ daughter & $\mathrm{NN}$ & my & PRP\$ \\ developed & VBD & programs & NNS \\ developed & VBN & programs & VBZ \\ first & $\mathrm{JJ}$ & tooth & $\mathrm{NN}$ \\ first & $\mathrm{RB}$ & whose & WP\$ \\ \end{tabular} \end{center} and consider the following sentence: my daughter whose first adult tooth has just developed programs What (formal) parameters make the difference in the choice of these different PoS taggings (for the above model)? Give the explicit mathematical formulas of these parts that are different.
In corpus linguistics, part-of-speech tagging (POS tagging or PoS tagging or POST), also called grammatical tagging, is the process of marking up a word in a text (corpus) as corresponding to a particular part of speech, based on both its definition and its context. A simplified form of this is commonly taught to school-age children, in the identification of words as nouns, verbs, adjectives, adverbs, etc. Once performed by hand, POS tagging is now done in the context of computational linguistics, using algorithms which associate discrete terms, as well as hidden parts of speech, with a set of descriptive tags. POS-tagging algorithms fall into two distinctive groups: rule-based and stochastic. E. Brill's tagger, one of the first and most widely used English POS-taggers, employs rule-based algorithms.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
We aim at tagging English texts with 'Part-of-Speech' (PoS) tags. For this, we consider using the following model (partial picture): ...some picture... Explanation of (some) tags: \begin{center} \begin{tabular}{l|l|l|l} Tag & English expl. & Expl. française & Example(s) \\ \hline JJ & Adjective & adjectif & yellow \\ NN & Noun, Singular & nom commun singulier & cat \\ NNS & Noun, Plural & nom commun pluriel & cats \\ PRP\$ & Possessive Pronoun & pronom possessif & my, one's \\ RB & Adverb & adverbe & never, quickly \\ VBD & Verb, Past Tense & verbe au passé & ate \\ VBN & Verb, Past Participle & participe passé & eaten \\ VBZ & Verb, Present 3P Sing & verbe au présent, 3e pers. sing. & eats \\ WP\$ & Possessive wh- & pronom relatif (poss.) & whose \\ \end{tabular} \end{center} We use the following (part of) lexicon: \begin{center} \begin{tabular}{l|ll|l} adult & JJ & has & VBZ \\ adult & $\mathrm{NN}$ & just & RB \\ daughter & $\mathrm{NN}$ & my & PRP\$ \\ developed & VBD & programs & NNS \\ developed & VBN & programs & VBZ \\ first & $\mathrm{JJ}$ & tooth & $\mathrm{NN}$ \\ first & $\mathrm{RB}$ & whose & WP\$ \\ \end{tabular} \end{center} and consider the following sentence: my daughter whose first adult tooth has just developed programs What (formal) parameters make the difference in the choice of these different PoS taggings (for the above model)? Give the explicit mathematical formulas of these parts that are different.
In corpus linguistics, part-of-speech tagging (POS tagging or PoS tagging or POST), also called grammatical tagging, is the process of marking up a word in a text (corpus) as corresponding to a particular part of speech, based on both its definition and its context. A simplified form of this is commonly taught to school-age children, in the identification of words as nouns, verbs, adjectives, adverbs, etc. Once performed by hand, POS tagging is now done in the context of computational linguistics, using algorithms which associate discrete terms, as well as hidden parts of speech, with a set of descriptive tags. POS-tagging algorithms fall into two distinctive groups: rule-based and stochastic. E. Brill's tagger, one of the first and most widely used English POS-taggers, employs rule-based algorithms.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Let's denote by respectively \(A\), \(B\) and \(C\) the value stored by the Viterbi algorithm in the node associated to respectively N, V and Adj for the word "time".If \(C > B > A\) and \(10 A \geq 9 C\), what would be the tag of "time" in the most probable tagging, if the tag of "control" is N (in the most probable tagging)?
This version of the halting problem is among the simplest, most-easily described undecidable decision problems: Given an arbitrary positive integer n and a list of n+1 arbitrary words P1,P2,...,Pn,Q on the alphabet {1,2,...,n}, does repeated application of the tag operation t: ijX → XPi eventually convert Q into a word of length less than 2? That is, does the sequence Q, t1(Q), t2(Q), t3(Q), ... terminate?
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Let's denote by respectively \(A\), \(B\) and \(C\) the value stored by the Viterbi algorithm in the node associated to respectively N, V and Adj for the word "time".If \(C > B > A\) and \(10 A \geq 9 C\), what would be the tag of "time" in the most probable tagging, if the tag of "control" is N (in the most probable tagging)?
A statistical tagger looks for the most probable tag for an ambiguously tagged text \(\sigma_1 \sigma_2 \ldots \sigma_L\): \(\gamma_1^* \ldots \gamma_L^* = \operatorname{arg\,max}_{\gamma \in T(\sigma)} p(\gamma_1 \ldots \gamma_L \mid \sigma_1 \ldots \sigma_L)\). Using Bayes formula, this is converted into: \(\gamma_1^* \ldots \gamma_L^* = \operatorname{arg\,max}_{\gamma \in T(\sigma)} p(\gamma_1 \ldots \gamma_L)\, p(\sigma_1 \ldots \sigma_L \mid \gamma_1 \ldots \gamma_L)\), where \(p(\gamma_1 \gamma_2 \ldots \gamma_L)\) is the probability of a particular tag sequence (syntactic probability) and \(p(\sigma_1 \ldots \sigma_L \mid \gamma_1 \ldots \gamma_L)\) is the probability that this tag sequence corresponds to the text \(\sigma_1 \ldots \sigma_L\) (lexical probability). In a Markov model, these probabilities are approximated as products. The syntactic probabilities are modelled by a first order Markov process: \(p(\gamma_1 \gamma_2 \ldots \gamma_L) = \prod_{t=1}^{t=L} p(\gamma_{t+1} \mid \gamma_t)\), where \(\gamma_0\) and \(\gamma_{L+1}\) are delimiter symbols.
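The product form above is exactly what a Viterbi-style decoder maximises over tag sequences; a compact sketch follows, with invented toy transition and emission tables (they are not taken from the exercise):

```python
def viterbi(words, tags, trans, emit, start="<s>"):
    """Return (probability, tag sequence) maximising prod_t p(tag_t|tag_{t-1}) * p(word_t|tag_t)."""
    best = {t: (trans[start].get(t, 0.0) * emit[t].get(words[0], 0.0), [t]) for t in tags}
    for w in words[1:]:
        new = {}
        for t in tags:
            score, prev = max(
                (best[s][0] * trans[s].get(t, 0.0) * emit[t].get(w, 0.0), s) for s in tags
            )
            new[t] = (score, best[prev][1] + [t])
        best = new
    return max(best.values())

tags = ["N", "V"]
trans = {"<s>": {"N": 0.8, "V": 0.2}, "N": {"N": 0.3, "V": 0.7}, "V": {"N": 0.6, "V": 0.4}}
emit = {"N": {"time": 0.6, "flies": 0.4}, "V": {"time": 0.1, "flies": 0.9}}
print(viterbi(["time", "flies"], tags, trans, emit))  # most probable tagging ['N', 'V'], prob ≈ 0.3024
```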
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
How is it possible to compute the average Precision/Recall curves? Explain in detail the various steps of the computation.
Precision and recall are single-value metrics based on the whole list of documents returned by the system. For systems that return a ranked sequence of documents, it is desirable to also consider the order in which the returned documents are presented. By computing a precision and recall at every position in the ranked sequence of documents, one can plot a precision-recall curve, plotting precision \(p(r)\) as a function of recall \(r\). Average precision computes the average value of \(p(r)\) over the interval from \(r = 0\) to \(r = 1\): \(\operatorname{AveP} = \int_0^1 p(r)\,dr\). That is the area under the precision-recall curve.
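In practice the integral is approximated by a finite sum over the ranks at which relevant documents occur; a short sketch (the relevance labels below are illustrative):

```python
def average_precision(ranked_relevance, n_relevant):
    """Mean of the precision values at each rank where a relevant document occurs."""
    hits, precisions = 0, []
    for rank, is_relevant in enumerate(ranked_relevance, start=1):
        if is_relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / n_relevant if n_relevant else 0.0

# relevant documents retrieved at ranks 2, 3 and 5, out of 3 relevant documents in total
print(average_precision([False, True, True, False, True], 3))  # (1/2 + 2/3 + 3/5) / 3
```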
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
How is it possible to compute the average Precision/Recall curves? Explain in detail the various steps of the computation.
Precision and recall are single-value metrics based on the whole list of documents returned by the system. For systems that return a ranked sequence of documents, it is desirable to also consider the order in which the returned documents are presented. By computing a precision and recall at every position in the ranked sequence of documents, one can plot a precision-recall curve, plotting precision \(p(r)\) as a function of recall \(r\). Average precision computes the average value of \(p(r)\) over the interval from \(r = 0\) to \(r = 1\): \(\operatorname{AveP} = \int_0^1 p(r)\,dr\). That is the area under the precision-recall curve.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the (toy) grammar $G$ consisting of the following rules: R1: S --> NP VP R2: NP --> NN R3: NP --> Det NN R4: NN --> N R5: NN --> NN NN R6: NN --> NN PNP R7: PNP --> Prep NP R8: VP --> V R9: VP --> Adv V What type of rules does the provided grammar $G$ consist of? What type of rules should $G$ be complemented with to be exploitable in practice? What is the format of these missing rules?
Context-free grammars are represented as a set of rules inspired from attempts to model natural languages. The rules are absolute and have a typical syntax representation known as Backus–Naur form. The production rules consist of terminal symbols (e.g. \(\{a, b\}\)) and non-terminal symbols (e.g. \(S\)), and a blank \(\epsilon\) may also be used as an end point. In the production rules of CFG and PCFG the left side has only one nonterminal whereas the right side can be any string of terminals or nonterminals.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the (toy) grammar $G$ consisting of the following rules: R1: S --> NP VP R2: NP --> NN R3: NP --> Det NN R4: NN --> N R5: NN --> NN NN R6: NN --> NN PNP R7: PNP --> Prep NP R8: VP --> V R9: VP --> Adv V What type of rules does the provided grammar $G$ consist of? What type of rules should $G$ be complemented with to be exploitable in practice? What is the format of these missing rules?
An intermediate class of grammars known as conjunctive grammars allows conjunction and disjunction, but not negation. The rules of a Boolean grammar are of the form \(A \to \alpha_1 \,\&\, \ldots \,\&\, \alpha_m \,\&\, \lnot\beta_1 \,\&\, \ldots \,\&\, \lnot\beta_n\), where \(A\) is a nonterminal, \(m + n \geq 1\), and \(\alpha_1, \ldots, \alpha_m, \beta_1, \ldots, \beta_n\) are strings formed of symbols in \(\Sigma\) and \(N\). Informally, such a rule asserts that every string \(w\) over \(\Sigma\) that satisfies each of the syntactical conditions represented by \(\alpha_1, \ldots, \alpha_m\) and none of the syntactical conditions represented by \(\beta_1, \ldots, \beta_n\) therefore satisfies the condition defined by \(A\).
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider 3 regular expressions \(A\), \(B\), and \(C\), such that:the sets of strings recognized by each of the regular expressions is non empty;the set of strings recognized by \(B\) is included in the set of strings recognized by \(A\);some strings are recognized simultaneously by \(A\) and by \(C\); andno string is recognized simultaneously by \(B\) and \(C\).Which of the following statements are true?(where, for a regular expression \(X\),  \((X)\) denotes the transducer which associates every string recognized by \(X\) to itself)(Penalty for wrong ticks)
The wildcard . matches any character. For example, a.b matches any string that contains an "a", and then any character and then "b"; a.*b matches any string that contains an "a", and then the character "b" at some later point. These constructions can be combined to form arbitrarily complex expressions, much like one can construct arithmetical expressions from numbers and the operations +, −, ×, and ÷. The precise syntax for regular expressions varies among tools and with context; more detail is given in § Syntax.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider 3 regular expressions \(A\), \(B\), and \(C\), such that:the sets of strings recognized by each of the regular expressions is non empty;the set of strings recognized by \(B\) is included in the set of strings recognized by \(A\);some strings are recognized simultaneously by \(A\) and by \(C\); andno string is recognized simultaneously by \(B\) and \(C\).Which of the following statements are true?(where, for a regular expression \(X\),  \((X)\) denotes the transducer which associates every string recognized by \(X\) to itself)(Penalty for wrong ticks)
Regular expressions consist of constants, which denote sets of strings, and operator symbols, which denote operations over these sets. The following definition is standard, and found as such in most textbooks on formal language theory. Given a finite alphabet Σ, the following constants are defined as regular expressions: (empty set) ∅ denoting the set ∅. (empty string) ε denoting the set containing only the "empty" string, which has no characters at all.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Assume that the texts to be tagged contain 1.5% of unknown words and that the performance of the tagger to be used is 98% on known words. What will be its typical overall performance in the following situation: all unknown words are systematically wrongly tagged?
However, algorithms used for one do not tend to work well for the other, mainly because the part of speech of a word is primarily determined by the immediately adjacent one to three words, whereas the sense of a word may be determined by words further away. The success rate for part-of-speech tagging algorithms is at present much higher than that for WSD, the state-of-the-art being around 96% accuracy or better, as compared to less than 75% accuracy in word sense disambiguation with supervised learning. These figures are typical for English, and may be very different from those for other languages.
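For the accuracy question above, the overall figure simply mixes the two word populations; a quick sketch with the numbers from the question (assuming the 98% known-word accuracy is unaffected by the unknown words):

```python
unknown_rate = 0.015      # 1.5% of words are unknown and all of them are tagged wrongly
known_accuracy = 0.98     # 98% accuracy on the remaining known words
overall = (1 - unknown_rate) * known_accuracy + unknown_rate * 0.0
print(overall)            # ≈ 0.9653, i.e. roughly 96.5% overall accuracy
```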
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Assume that the texts to be tagged contain 1.5% of unknown words and that the performance of the tagger to be used is 98% on known words. What will be its typical overall performance in the following situation: all unknown words are systematically wrongly tagged?
However, many significant taggers are not included (perhaps because of the labor involved in reconfiguring them for this particular dataset). Thus, it should not be assumed that the results reported here are the best that can be achieved with a given approach; nor even the best that have been achieved with a given approach. In 2014, a paper reported using the structure regularization method for part-of-speech tagging, achieving 97.36% on a standard benchmark dataset.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
For each of the sub-questions of this question (next page), tick/check the corresponding box if the presented sentence is correct at the corresponding level (for a human). There will be a penalty for wrong boxes ticked/checked.The duke were also presented with a book commemorated his visit’s mother.
Second, if the wrong answers were blind guesses, there would be no information to be found among these answers. On the other hand, if wrong answers reflect interpretation departures from the expected one, these answers should show an ordered relationship to whatever the overall test is measuring. This departure should be dependent upon the level of psycholinguistic maturity of the student choosing or giving the answer in the vernacular in which the test is written.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
For each of the sub-questions of this question (next page), tick/check the corresponding box if the presented sentence is correct at the corresponding level (for a human). There will be a penalty for wrong boxes ticked/checked.The duke were also presented with a book commemorated his visit’s mother.
The second sentence is an echo question; it would be uttered only after receiving an unsatisfactory or confusing answer to a question. One could replace the word wen (which indicates that this sentence is a question) with an identifier such as Mark: 'Kate liebt Mark?' .
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
For each of the sub-questions of this question (next page), tick/check the corresponding box if the presented sentence is correct at the corresponding level (for a human). There will be a penalty for wrong boxes ticked/checked.The mouse lost a feather as it took off.
Failing both of the above, capture a box that touches at least one other box held by any player. Any time a contestant answers a question incorrectly, other than on the first question or any puzzle, that player is locked out from answering for two questions (originally three). If a question was answered incorrectly, play ends on that question, and a new question is asked; on a puzzle, play continues until someone answers correctly. (Players who are locked out must stand up from their chair and step away from their buzzers.)
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
For each of the sub-questions of this question (next page), tick/check the corresponding box if the presented sentence is correct at the corresponding level (for a human). There will be a penalty for wrong boxes ticked/checked.The mouse lost a feather as it took off.
Also, scrolling text too fast can make it unreadable to some people, particularly those with visual impairments. This can easily frustrate users. To combat this, client-side scripting allows marquees to be programmed to stop when the mouse is over them.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the following document: D = 'the exports from Switzerland to the USA are increasing in 2006' Propose a possible indexing set for this document. Justify your answer.
As it happens, \(\eta^{\mu\nu} = \eta_{\mu\nu}\). This is referred to as raising an index.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the following document: D = 'the exports from Switzerland to the USA are increasing in 2006' Propose a possible indexing set for this document. Justify your answer.
More than 3,000 academic papers used data from the index. The effect of improving regulations on economic growth is claimed to be very strong. Moving from the worst one-fourth of nations to the best one-fourth implies a 2.3 percentage point increase in annual growth.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the following grammar: S -> NP VP NP -> Det N VP -> VBe Adj NP -> NP PP VP -> V N -> Adj N VP -> VP PP Adj -> Adj PP V -> VBe Adj -> Ving PP -> Prep NP and the following lexicon: at:Prep is:VBe old:Adj black:Adj looking:Ving the:Det cat:N mouse:N under:Prep former:Adj nice:Adj with:Prep This grammar also accepts the following examples, which are (either syntactically or semantically) incorrect in English: the cat is old at the mouse the cat is nice under the mouse the cat is nice at the mouse at the mouse In the first example, attaching 'at the mouse' to 'old' is incorrect in English because some adjectives (e.g. 'old') may not have a PP; the second example is incorrect because 'nice' can only take PPs where the preposition is limited to a certain subset (e.g. 'at', but not 'under'); and the third example is incorrect because adjectives may not combine with more than one PP. Propose modifications to the grammar in order to prevent these types of over-generation.
Like with all other types of phrases, theories of syntax render the syntactic structure of adpositional phrases using trees. The trees that follow represent adpositional phrases according to two modern conventions for rendering sentence structure, first in terms of the constituency relation of phrase structure grammars and then in terms of the dependency relation of dependency grammars. The following labels are used on the nodes in the trees: Adv = adverb, N = nominal (noun or pronoun), P = preposition/postposition, and PP = pre/postpositional phrase: These phrases are identified as prepositional phrases by the placement of PP at the top of the constituency trees and of P at the top of the dependency trees. English also has a number of two-part prepositional phrases, i.e. phrases that can be viewed as containing two prepositions, e.g. Assuming that ago in English is indeed a postposition as suggested above, a typical ago-phrase would receive the following structural analyses: The analysis of circumpositional phrases is not so clear, since it is not obvious which of the two adpositions should be viewed as the head of the phrase. However, the following analyses are more in line with the fact that English is primarily a head-initial language:
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the following grammar: S -> NP VP NP -> Det N VP -> VBe Adj NP -> NP PP VP -> V N -> Adj N VP -> VP PP Adj -> Adj PP V -> VBe Adj -> Ving PP -> Prep NP and the following lexicon: at:Prep is:VBe old:Adj black:Adj looking:Ving the:Det cat:N mouse:N under:Prep former:Adj nice:Adj with:Prep This grammar also accepts the following examples, which are (either syntactically or semantically) incorrect in English: the cat is old at the mouse the cat is nice under the mouse the cat is nice at the mouse at the mouse In the first example, attaching 'at the mouse' to 'old' is incorrect in English because some adjectives (e.g. 'old') may not have a PP; the second example is incorrect because 'nice' can only take PPs where the preposition is limited to a certain subset (e.g. 'at', but not 'under'); and the third example is incorrect because adjectives may not combine with more than one PP. Propose modifications to the grammar in order to prevent these types of over-generation.
Like with all other types of phrases, theories of syntax render the syntactic structure of adpositional phrases using trees. The trees that follow represent adpositional phrases according to two modern conventions for rendering sentence structure, first in terms of the constituency relation of phrase structure grammars and then in terms of the dependency relation of dependency grammars. The following labels are used on the nodes in the trees: Adv = adverb, N = nominal (noun or pronoun), P = preposition/postposition, and PP = pre/postpositional phrase: These phrases are identified as prepositional phrases by the placement of PP at the top of the constituency trees and of P at the top of the dependency trees. English also has a number of two-part prepositional phrases, i.e. phrases that can be viewed as containing two prepositions, e.g. Assuming that ago in English is indeed a postposition as suggested above, a typical ago-phrase would receive the following structural analyses: The analysis of circumpositional phrases is not so clear, since it is not obvious which of the two adpositions should be viewed as the head of the phrase. However, the following analyses are more in line with the fact that English is primarily a head-initial language:
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Select what statements are true about probabilistic parsing.A penalty will be applied for any wrong answers selected.
(The probability associated with a grammar rule may be induced, but the application of that grammar rule within a parse tree and the computation of the probability of the parse tree based on its component rules is a form of deduction.) Using this concept, statistical parsers make use of a procedure to search over a space of all candidate parses, and the computation of each candidate's probability, to derive the most probable parse of a sentence. The Viterbi algorithm is one popular method of searching for the most probable parse.
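Under this view, the score of a single candidate parse is just the product of the probabilities of the rules it uses; a tiny sketch with made-up rule probabilities:

```python
from math import prod

# Hypothetical rule probabilities of a small PCFG: (lhs, rhs) -> probability
rule_prob = {
    ("S", ("NP", "VP")): 1.0,
    ("NP", ("Det", "N")): 0.6,
    ("VP", ("V", "NP")): 0.4,
}

def parse_probability(rules_used):
    """Probability of a parse tree = product of the probabilities of its rules."""
    return prod(rule_prob[r] for r in rules_used)

tree = [("S", ("NP", "VP")), ("NP", ("Det", "N")), ("VP", ("V", "NP")), ("NP", ("Det", "N"))]
print(parse_probability(tree))  # 1.0 * 0.6 * 0.4 * 0.6 ≈ 0.144
```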
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Select what statements are true about probabilistic parsing.A penalty will be applied for any wrong answers selected.
(The probability associated with a grammar rule may be induced, but the application of that grammar rule within a parse tree and the computation of the probability of the parse tree based on its component rules is a form of deduction.) Using this concept, statistical parsers make use of a procedure to search over a space of all candidate parses, and the computation of each candidate's probability, to derive the most probable parse of a sentence. The Viterbi algorithm is one popular method of searching for the most probable parse.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
What are possible morphological analyses of "drinks"?(Penalty for wrong ticks)
"Spatiotemporal variation in a Lyme disease host and vector: black-legged ticks on white-footed mice". Vector-Borne and Zoonotic Diseases. 1 (2): 129–138.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
What are possible morphological analyses of "drinks"?(Penalty for wrong ticks)
2014. Bat ticks revisited: Ixodes ariadnae sp. nov.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider two Information Retrieval systems S1 and S2 that produced the following outputs for the 4 reference queries q1, q2, q3, q4: S1: | referential: q1: d01 d02 d03 d04 dXX dXX dXX dXX | q1: d01 d02 d03 d04 q2: d06 dXX dXX dXX dXX | q2: d05 d06 q3: dXX d07 d09 d11 dXX dXX dXX dXX dXX | q3: d07 d08 d09 d10 d11 q4: d12 dXX dXX d14 d15 dXX dXX dXX dXX | q4: d12 d13 d14 d15 S2:: | referential: q1: dXX dXX dXX dXX d04 | q1: d01 d02 d03 d04 q2: dXX dXX d05 d06 | q2: d05 d06 q3: dXX dXX d07 d08 d09 | q3: d07 d08 d09 d10 d11 q4: dXX d13 dXX d15 | q4: d12 d13 d14 d15 where dXX refer to document references that do not appear in the referential. To make the answer easier, we copied the referential on the right. For each of the two systems, compute the mean Precision and Recall measures (provide the results as fractions). Explain all the steps of your computation.
Written as a formula: \(\frac{\text{relevant retrieved instances}}{\text{all relevant instances}}\). Both precision and recall are therefore based on relevance.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider two Information Retrieval systems S1 and S2 that produced the following outputs for the 4 reference queries q1, q2, q3, q4: S1: | referential: q1: d01 d02 d03 d04 dXX dXX dXX dXX | q1: d01 d02 d03 d04 q2: d06 dXX dXX dXX dXX | q2: d05 d06 q3: dXX d07 d09 d11 dXX dXX dXX dXX dXX | q3: d07 d08 d09 d10 d11 q4: d12 dXX dXX d14 d15 dXX dXX dXX dXX | q4: d12 d13 d14 d15 S2:: | referential: q1: dXX dXX dXX dXX d04 | q1: d01 d02 d03 d04 q2: dXX dXX d05 d06 | q2: d05 d06 q3: dXX dXX d07 d08 d09 | q3: d07 d08 d09 d10 d11 q4: dXX d13 dXX d15 | q4: d12 d13 d14 d15 where dXX refer to document references that do not appear in the referential. To make the answer easier, we copied the referential on the right. For each of the two systems, compute the mean Precision and Recall measures (provide the results as fractions). Explain all the steps of your computation.
Information retrieval systems, such as databases and web search engines, are evaluated by many different metrics, some of which are derived from the confusion matrix, which divides results into true positives (documents correctly retrieved), true negatives (documents correctly not retrieved), false positives (documents incorrectly retrieved), and false negatives (documents incorrectly not retrieved). Commonly used metrics include the notions of precision and recall. In this context, precision is defined as the fraction of retrieved documents which are relevant to the query (true positives divided by true+false positives), using a set of ground truth relevant results selected by humans. Recall is defined as the fraction of relevant documents retrieved compared to the total number of relevant documents (true positives divided by true positives+false negatives).
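A small helper showing the per-query computation and the macro-average over queries; the document identifiers below are placeholders, not the ones from the question:

```python
def precision_recall(retrieved, relevant):
    hits = len(set(retrieved) & set(relevant))
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

queries = {                                  # query -> (retrieved list, referential)
    "q1": (["d1", "dX", "d2"], ["d1", "d2", "d3"]),
    "q2": (["dX", "d5"], ["d5"]),
}
pairs = [precision_recall(ret, rel) for ret, rel in queries.values()]
mean_precision = sum(p for p, _ in pairs) / len(pairs)
mean_recall = sum(r for _, r in pairs) / len(pairs)
print(mean_precision, mean_recall)  # averages of the per-query values
```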
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Give some arguments justifying why evaluation is especially important for NLP. In particular, explain the role of evaluation when a corpus-based approach is used.
As in other scientific fields, NLG researchers need to test how well their systems, modules, and algorithms work. This is called evaluation. There are three basic techniques for evaluating NLG systems: Task-based (extrinsic) evaluation: give the generated text to a person, and assess how well it helps them perform a task (or otherwise achieves its communicative goal). For example, a system which generates summaries of medical data can be evaluated by giving these summaries to doctors, and assessing whether the summaries help doctors make better decisions.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Give some arguments justifying why evaluation is especially important for NLP. In particular, explain the role of evaluation when a corpus-based approach is used.
As in other scientific fields, NLG researchers need to test how well their systems, modules, and algorithms work. This is called evaluation. There are three basic techniques for evaluating NLG systems: Task-based (extrinsic) evaluation: give the generated text to a person, and assess how well it helps them perform a task (or otherwise achieves its communicative goal). For example, a system which generates summaries of medical data can be evaluated by giving these summaries to doctors, and assessing whether the summaries help doctors make better decisions.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Provide a formal definition of a transducer. Give some good reasons to use such a tool for morphological processing.
Finite State Transducers (FSTs) are a popular technique for the computational handling of morphology, esp., inflectional morphology. In rule-based morphological parsers, both lexicon and rules are normally formalized as finite state automata and subsequently combined. They thus require morphological dictionaries with specific processing instructions (which often have a linguistic interpretation, but, technically, are just treated like arbitrary string symbols).
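To make the "automaton with output" idea concrete, here is a deliberately tiny Python sketch of a transducer mapping a surface form to a morphological analysis; the toy lexicon (cat/cats) and tag strings are purely illustrative:

```python
# Transitions map (state, input character) -> (next state, output string).
transitions = {
    (0, "c"): (1, "c"), (1, "a"): (2, "a"), (2, "t"): (3, "t"),
    (3, "s"): (4, "+N+pl"),        # surface plural "s" outputs the plural analysis
}
finals = {3: "+N+sg", 4: ""}       # output emitted when stopping in a final state

def analyze(word, start=0):
    state, output = start, ""
    for ch in word:
        if (state, ch) not in transitions:
            return None            # the surface form is not in the toy lexicon
        state, emitted = transitions[(state, ch)]
        output += emitted
    return output + finals[state] if state in finals else None

print(analyze("cat"))   # cat+N+sg
print(analyze("cats"))  # cat+N+pl
```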
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Provide a formal definition of a transducer. Give some good reasons to use such a tool for morphological processing.
A transducer is a device that takes energy from one domain as input and converts it to another energy domain as output. They are often reversible, but are rarely used in that way. Transducers have many uses and there are many kinds, in electromechanical systems they can be used as actuators and sensors. In audio electronics they provide the conversion between the electrical and acoustical domains.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider we use the set of transformations: insertion, deletion, substitution, and transposition. We want to compute the edit distance between words execution and exceuton, i.e. D(execution, exceuton).When computing the above, what is the value you get for D(exec,exce)?Give your answer as a numerical value. 
The closeness of a match is measured in terms of the number of primitive operations necessary to convert the string into an exact match. This number is called the edit distance between the string and the pattern. The usual primitive operations are: insertion: cot → coat; deletion: coat → cot; substitution: coat → cost. These three operations may be generalized as forms of substitution by adding a NULL character (here symbolized by *) wherever a character has been deleted or inserted: insertion: co*t → coat; deletion: coat → co*t; substitution: coat → cost. Some approximate matchers also treat transposition, in which the positions of two letters in the string are swapped, to be a primitive operation.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider we use the set of transformations: insertion, deletion, substitution, and transposition. We want to compute the edit distance between words execution and exceuton, i.e. D(execution, exceuton).When computing the above, what is the value you get for D(exec,exce)?Give your answer as a numerical value. 
There are other popular measures of edit distance, which are calculated using a different set of allowable edit operations. For instance, the Levenshtein distance allows deletion, insertion and substitution; the Damerau–Levenshtein distance allows insertion, deletion, substitution, and the transposition of two adjacent characters; the longest common subsequence (LCS) distance allows only insertion and deletion, not substitution; the Hamming distance allows only substitution, hence, it only applies to strings of the same length. Edit distance is usually defined as a parameterizable metric calculated with a specific set of allowed edit operations, and each operation is assigned a cost (possibly infinite). This is further generalized by DNA sequence alignment algorithms such as the Smith–Waterman algorithm, which make an operation's cost depend on where it is applied.
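A sketch of the restricted Damerau–Levenshtein distance (insertion, deletion, substitution and adjacent transposition, all at unit cost), which is the operation set used in the edit-distance question above:

```python
def damerau_levenshtein(a, b):
    """Edit distance with insertion, deletion, substitution and adjacent transposition."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution (or match)
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # adjacent transposition
    return d[len(a)][len(b)]

print(damerau_levenshtein("exec", "exce"))  # 1 (a single adjacent transposition)
```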
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
You are responsible for a project aiming at providing on-line recommendations to the customers of a on-line book selling company. The general idea behind this recommendation system is to cluster books according to both customers and content similarities, so as to propose books similar to the books already bought by a given customer. The core of the recommendation system is a clustering algorithm aiming at regrouping books likely to be appreciate by the same person. This clustering should not only be achieved based on the purchase history of customers, but should also be refined by the content of the books themselves. It's that latter aspect we want to address in this exam question. Consider the following six 'documents' (toy example): d1: 'Because cows are not sorted as they return from the fields to their home pen, cow flows are improved.' d2: 'He was convinced that if he owned the fountain pen that he'd seen in the shop window for years, he could write fantastic stories with it. That was the kind of pen you cannot forget.' d3: 'With this book you will learn how to draw humans, animals (cows, horses, etc.) and flowers with a charcoal pen.' d4: 'The cows were kept in pens behind the farm, hidden from the road. That was the typical kind of pen made for cows.' d5: 'If Dracula wrote with a fountain pen, this would be the kind of pen he would write with, filled with blood red ink. It was the pen she chose for my punishment, the pen of my torment. What a mean cow!' d6: 'What pen for what cow? A red pen for a red cow, a black pen for a black cow, a brown pen for a brown cow, ... Understand?' and suppose (toy example) that they are indexed only by the two words: pen and cow. What is the result of the dendrogram clustering algorithm on those six documents, using the cosine similarity and single linkage? Explain all the steps. Hint: $5 / \sqrt{34}<3 / \sqrt{10}<4 / \sqrt{17}$.
One key choice for researchers when applying unsupervised methods is selecting the number of categories to sort documents into rather than defining what the categories are in advance. Single membership models: these models automatically cluster texts into different categories that are mutually exclusive, and documents are coded into one and only one category. As pointed out by Grimmer and Stewart (16), "each algorithm has three components: (1) a definition of document similarity or distance; (2) an objective function that operationalizes an ideal clustering; and (3) an optimization algorithm."
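A compact sketch of the similarity side of such a clustering: cosine similarity between term-count vectors and a single-linkage comparison between clusters. The two-dimensional count vectors are placeholders, not the actual counts from the documents in the question:

```python
from math import sqrt

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

docs = {"d1": (1, 2), "d2": (3, 0), "d3": (1, 1), "d4": (2, 3)}  # hypothetical (pen, cow) counts

def single_link_similarity(c1, c2):
    """Single linkage: similarity of the closest pair of members across the two clusters."""
    return max(cosine(docs[a], docs[b]) for a in c1 for b in c2)

clusters = [{"d1"}, {"d2"}, {"d3"}, {"d4"}]
i, j = max(((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
           key=lambda ij: single_link_similarity(clusters[ij[0]], clusters[ij[1]]))
print("first merge:", clusters[i], clusters[j])  # the most cosine-similar pair is merged first
```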
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
You are responsible for a project aiming at providing on-line recommendations to the customers of a on-line book selling company. The general idea behind this recommendation system is to cluster books according to both customers and content similarities, so as to propose books similar to the books already bought by a given customer. The core of the recommendation system is a clustering algorithm aiming at regrouping books likely to be appreciate by the same person. This clustering should not only be achieved based on the purchase history of customers, but should also be refined by the content of the books themselves. It's that latter aspect we want to address in this exam question. Consider the following six 'documents' (toy example): d1: 'Because cows are not sorted as they return from the fields to their home pen, cow flows are improved.' d2: 'He was convinced that if he owned the fountain pen that he'd seen in the shop window for years, he could write fantastic stories with it. That was the kind of pen you cannot forget.' d3: 'With this book you will learn how to draw humans, animals (cows, horses, etc.) and flowers with a charcoal pen.' d4: 'The cows were kept in pens behind the farm, hidden from the road. That was the typical kind of pen made for cows.' d5: 'If Dracula wrote with a fountain pen, this would be the kind of pen he would write with, filled with blood red ink. It was the pen she chose for my punishment, the pen of my torment. What a mean cow!' d6: 'What pen for what cow? A red pen for a red cow, a black pen for a black cow, a brown pen for a brown cow, ... Understand?' and suppose (toy example) that they are indexed only by the two words: pen and cow. What is the result of the dendrogram clustering algorithm on those six documents, using the cosine similarity and single linkage? Explain all the steps. Hint: $5 / \sqrt{34}<3 / \sqrt{10}<4 / \sqrt{17}$.
Another is clustering, which analyzes a set of documents by grouping similar or co-occurring documents or terms. Clustering allows the results to be partitioned into groups of related documents. For example, a search for "java" might return clusters for Java (programming language), Java (island), or Java (coffee).
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Only \( G \) different 4-grams (values) are indeed observed. What is the probability of the others:If a 4-gram has a probability estimated to be \( p \) with Maximum-Likelihood estimation, what would be its probability if estimated using “additive smoothing” with a Dirichlet prior with parameter \( (\alpha, \cdots, \alpha) \) ?
Additive smoothing is a type of shrinkage estimator, as the resulting estimate will be between the empirical probability (relative frequency) \(x_i / N\) and the uniform probability \(1/d\). Invoking Laplace's rule of succession, some authors have argued that α should be 1 (in which case the term add-one smoothing is also used), though in practice a smaller value is typically chosen. From a Bayesian point of view, this corresponds to the expected value of the posterior distribution, using a symmetric Dirichlet distribution with parameter α as a prior distribution. In the special case where the number of categories is 2, this is equivalent to using a beta distribution as the conjugate prior for the parameters of the binomial distribution.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Only \( G \) different 4-grams (values) are indeed observed. What is the probability of the others:If a 4-gram has a probability estimated to be \( p \) with Maximum-Likelihood estimation, what would be its probability if estimated using “additive smoothing” with a Dirichlet prior with parameter \( (\alpha, \cdots, \alpha) \) ?
Smoothing bias complicates interval estimation for these models, and the simplest approach turns out to involve a Bayesian approach. Understanding this Bayesian view of smoothing also helps to understand the REML and full Bayes approaches to smoothing parameter estimation. At some level smoothing penalties are imposed because we believe smooth functions to be more probable than wiggly ones, and if that is true then we might as well formalize this notion by placing a prior on model wiggliness. A very simple prior might be \(\pi(\beta) \propto \exp\{-\beta^{T} \sum_j \lambda_j S_j \beta / (2\phi)\}\) (where \(\phi\) is the GLM scale parameter introduced only for later convenience), but we can immediately recognize this as a multivariate normal prior with mean \(0\) and precision matrix \(S_\lambda = \sum_j \lambda_j S_j / \phi\).
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the following SCFG with the following probabilities: S → NP VP 0.8, S → NP VP PP 0.2, VP → Vi {a1}, VP → Vt NP {a2}, VP → VP PP a, NP → Det NN 0.3, NP → NP PP 0.7, PP → Prep NP 1.0, Vi → sleeps 1.0, Vt → saw 1.0, NN → man {b1}, NN → dog b, NN → telescope {b2}, Det → the 1.0, Prep → with 0.5, Prep → in 0.5. What is the value of a? (Give your answer as a numerical value, not as a formula)
\(\Pr(G=T, S=T, R=T) = \Pr(G=T \mid S=T, R=T)\,\Pr(S=T \mid R=T)\,\Pr(R=T) = 0.99 \times 0.01 \times 0.2 = 0.00198.\) Then the numerical results (subscripted by the associated variable values) are \(\Pr(R=T \mid G=T) = \frac{0.00198_{TTT} + 0.1584_{TFT}}{0.00198_{TTT} + 0.288_{TTF} + 0.1584_{TFT} + 0.0_{TFF}} = \frac{891}{2491} \approx 35.77\%.\) To answer an interventional question, such as "What is the probability that it would rain, given that we wet the grass?"
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the following SCFG with the following probabilities: S → NP VP 0.8, S → NP VP PP 0.2, VP → Vi {a1}, VP → Vt NP {a2}, VP → VP PP a, NP → Det NN 0.3, NP → NP PP 0.7, PP → Prep NP 1.0, Vi → sleeps 1.0, Vt → saw 1.0, NN → man {b1}, NN → dog b, NN → telescope {b2}, Det → the 1.0, Prep → with 0.5, Prep → in 0.5. What is the value of a? (Give your answer as a numerical value, not as a formula)
One question is whether to treat the range of obtained values for \(|M^{0\nu}|\) as the theoretical uncertainty and whether this is then to be understood as a statistical uncertainty. Different approaches are being chosen here. The obtained values for \(|M^{0\nu}|\) often vary by factors of 2 up to about 5.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
What is the formal relation between accuracy and the error rate? In which case would you recommend to use the one or the other?
A further complication is added by whether a given syntax allows for error correction and, if it does, how easy that process is for the user. There is thus some merit to the argument that performance metrics should be developed to suit the particular system being measured. Whichever metric is used, however, one major theoretical problem in assessing the performance of a system is deciding whether a word has been “mis-pronounced,” i.e. does the fault lie with the user or with the recogniser.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
What is the formal relation between accuracy and the error rate? In which case would you recommend to use the one or the other?
Accuracy and precision are two measures of observational error. Accuracy is how close a given set of measurements (observations or readings) are to their true value, while precision is how close the measurements are to each other. In other words, precision is a description of random errors, a measure of statistical variability. Accuracy has two definitions: More commonly, it is a description of only systematic errors, a measure of statistical bias of a given measure of central tendency; low accuracy causes a difference between a result and a true value; ISO calls this trueness. Alternatively, ISO defines accuracy as describing a combination of both types of observational error (random and systematic), so high accuracy requires both high precision and high trueness. In the first, more common definition of "accuracy" above, the concept is independent of "precision", so a particular set of data can be said to be accurate, precise, both, or neither. In simpler terms, given a statistical sample or set of data points from repeated measurements of the same quantity, the sample or set can be said to be accurate if their average is close to the true value of the quantity being measured, while the set can be said to be precise if their standard deviation is relatively small.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the following CF grammar \(G_1\) \( R_1: \text{S} \rightarrow \text{NP VP} \) \( R_2: \text{S} \rightarrow \text{NP VP PNP} \) \( R_3: \text{PNP} \rightarrow \text{Prep NP} \) \( R_4: \text{NP} \rightarrow \text{N} \) \( R_5: \text{NP} \rightarrow \text{Det N} \) \( R_6: \text{NP} \rightarrow \text{Det N PNP} \) \( R_7: \text{VP} \rightarrow \text{V} \) \( R_8: \text{VP} \rightarrow \text{V NP} \) (where \(\text{Det}\), \(\text{N}\), \(\text{Prep}\) and \(\text{V}\) are the only pre-terminals), complemented by an adequate lexicon \(L_1\).If the sequence \((p_1, p_2, \dots, p_8)\) represents a set of probabilistic coefficients for the syntactic rules in \(G_1\) (\(p_i\) being associated to \(R_i\)), indicate which of the following choices correspond to a valid probabilistic extension for the grammar \(G_1\). (Penalty for wrong ticks.)
A probabilistic context free grammar consists of terminal and nonterminal variables. Each feature to be modeled has a production rule that is assigned a probability estimated from a training set of RNA structures. Production rules are recursively applied until only terminal residues are left. A starting non-terminal \(S\) produces loops.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the following CF grammar \(G_1\) \( R_1: \text{S} \rightarrow \text{NP VP} \) \( R_2: \text{S} \rightarrow \text{NP VP PNP} \) \( R_3: \text{PNP} \rightarrow \text{Prep NP} \) \( R_4: \text{NP} \rightarrow \text{N} \) \( R_5: \text{NP} \rightarrow \text{Det N} \) \( R_6: \text{NP} \rightarrow \text{Det N PNP} \) \( R_7: \text{VP} \rightarrow \text{V} \) \( R_8: \text{VP} \rightarrow \text{V NP} \) (where \(\text{Det}\), \(\text{N}\), \(\text{Prep}\) and \(\text{V}\) are the only pre-terminals), complemented by an adequate lexicon \(L_1\).If the sequence \((p_1, p_2, \dots, p_8)\) represents a set of probabilistic coefficients for the syntactic rules in \(G_1\) (\(p_i\) being associated to \(R_i\)), indicate which of the following choices correspond to a valid probabilistic extension for the grammar \(G_1\). (Penalty for wrong ticks.)
Similar to a CFG, a probabilistic context-free grammar G can be defined by a quintuple: \(G = (M, T, R, S, P)\), where \(M\) is the set of non-terminal symbols, \(T\) is the set of terminal symbols, \(R\) is the set of production rules, \(S\) is the start symbol, and \(P\) is the set of probabilities on production rules.
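The usual well-formedness condition on P is that the probabilities of all rules sharing a left-hand side sum to 1; a quick check, sketched on a made-up grammar fragment (not the grammar from the question):

```python
from collections import defaultdict

rules = {                            # hypothetical SCFG fragment: (lhs, rhs) -> probability
    ("NP", ("Det", "N")): 0.3,
    ("NP", ("NP", "PP")): 0.7,
    ("PP", ("Prep", "NP")): 1.0,
}

totals = defaultdict(float)
for (lhs, _), p in rules.items():
    totals[lhs] += p

for lhs, total in totals.items():
    status = "OK" if abs(total - 1.0) < 1e-9 else f"invalid (sums to {total})"
    print(lhs, status)
```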
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
You have been publishing a daily column for the Gazette over the last few years and have recently reached a milestone --- your 1000th column! Realizing you'd like to go skiing more often, you decide it might be easier to automate your job by training a story generation system on the columns you've already written. Then, whenever your editor pitches you a title for a column topic, you'll just be able to give the title to your story generation system, produce the text body of the column, and publish it to the website! To evaluate your system, you decide to hold out some of the columns you have previously written and use them as an evaluation set. After generating new columns using the same titles as these held-out columns, you decide to evaluate their quality. What would be an advantage of using a model-based metric?
An individual or small business might have this publishing process: Brainstorm content ideas to publish, where to publish, and when to publish; Write each piece of content based on the publication schedule; Edit each piece of content; Publish each piece of content. A larger group might have this publishing process: Brainstorm content ideas to publish, where to publish, and when to publish; include backup content items for each piece of content; include dates to determine whether to delay or kill each content item (for example, if a writer becomes ill or an interview subject is unavailable); Assign each piece of content based on the publication schedule; Write each piece of content; Review the first draft of each piece of content; Give "go" or "no go" decision based on first draft edit and other criteria (then adjust the publishing schedule as needed); If you go, finish writing each piece of content and submit draft content to the layout team, so they can plan their work; Perform final edit, copy edit, fact checking, and rewrites as needed; Submit piece of content for review by legal team; Make changes if or as needed based on legal input; Submit piece of content formally to layout team for their creation of artwork to be included with the published content; Post content on a development or test server and make final changes if needed; Publish content on the production server or other media. Whether the publishing process is simple or complex, the movement is forward and iterative. Publishers encounter and cross a number of hurdles before a piece of content appears in print, on a website or blog, or in a social media outlet like Twitter or Facebook. The details included and tracked in an editorial calendar depend upon the steps involved in publishing content for a publication, as well as what is useful to track. Too little or too much data make editorial calendars difficult to maintain and use. Some amount of tweaking of editorial calendar elements, while using the calendar to publish content, is required before they can be truly useful.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
You have been publishing a daily column for the Gazette over the last few years and have recently reached a milestone --- your 1000th column! Realizing you'd like to go skiing more often, you decide it might be easier to automate your job by training a story generation system on the columns you've already written. Then, whenever your editor pitches you a title for a column topic, you'll just be able to give the title to your story generation system, produce the text body of the column, and publish it to the website! To evaluate your system, you decide to hold out some of the columns you have previously written and use them as an evaluation set. After generating new columns using the same titles as these held-out columns, you decide to evaluate their quality. What would be an advantage of using a model-based metric?
The main drawback of the evaluation systems so far is that we need one or more reference (model) summaries against which to compare the automatic summaries. Producing these is a hard and expensive task: much effort has to be made to create corpora of texts and their corresponding summaries. Furthermore, some methods require manual annotation of the summaries (e.g. SCU in the Pyramid Method). Moreover, they all perform a quantitative evaluation with regard to different similarity metrics.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
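One commonly cited advantage of a model-based metric in this setting is that a trained model can score a generated column directly, rather than relying solely on surface overlap with hand-written references. As a deliberately crude, self-contained illustration (the toy corpus, the tokenisation, and the add-alpha smoothing constant are assumptions made for this sketch, not part of the question), the snippet below scores a generated column by its perplexity under a bigram language model trained on previously written columns.

```python
import math
from collections import Counter

def train_bigram_lm(corpus, alpha=0.1):
    """Train an add-alpha smoothed bigram model from a list of token lists."""
    unigrams, bigrams, vocab = Counter(), Counter(), set()
    for sent in corpus:
        toks = ["<s>"] + sent
        vocab.update(toks)
        unigrams.update(toks)
        bigrams.update(zip(toks, toks[1:]))
    vocab_size = len(vocab)

    def prob(prev, word):
        return (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab_size)

    return prob

def perplexity(prob, tokens):
    """Perplexity of a token sequence under the bigram model `prob`."""
    toks = ["<s>"] + tokens
    log_p = sum(math.log(prob(prev, word)) for prev, word in zip(toks, toks[1:]))
    return math.exp(-log_p / (len(toks) - 1))

# Hypothetical data: past columns as training data, one generated column to score.
past_columns = [["the", "snow", "was", "perfect", "today"],
                ["skiing", "season", "starts", "soon"]]
generated_column = ["the", "skiing", "was", "perfect"]

lm = train_bigram_lm(past_columns)
print(perplexity(lm, generated_column))  # lower perplexity = more fluent according to the model
```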
Why is natural language processing difficult? Select all that apply. A penalty will be applied for wrong answers.
Missing punctuation and the use of non-standard words can often hinder standard natural language processing tools such as part-of-speech taggers and parsers. Techniques both to learn from such noisy data and to process it are only now being developed.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Why is natural language processing difficult? Select all that apply. A penalty will be applied for wrong answers.
Most of the more successful systems use lexical statistics (that is, they consider the identities of the words involved, as well as their part of speech). However, such systems are vulnerable to overfitting and require some kind of smoothing to be effective. Parsing algorithms for natural language cannot rely on the grammar having 'nice' properties, as with manually designed grammars for programming languages. As mentioned earlier, some grammar formalisms are very difficult to parse computationally; in general, even if the desired structure is not context-free, some kind of context-free approximation to the grammar is used to perform a first pass.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
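To make the remark about smoothing concrete, here is a minimal sketch (the counts and vocabulary size are invented for illustration) of add-one (Laplace) smoothing applied to lexical statistics, so that word/tag pairs never observed in training still receive non-zero probability:

```python
from collections import Counter

def smoothed_word_given_tag(training_pairs, vocab_size):
    """Add-one smoothed estimate of P(word | tag) from observed (word, tag) pairs."""
    pair_counts = Counter(training_pairs)
    tag_counts = Counter(tag for _, tag in training_pairs)

    def prob(word, tag):
        return (pair_counts[(word, tag)] + 1) / (tag_counts[tag] + vocab_size)

    return prob

# Toy training data: "mouse" is never seen as a verb.
training_pairs = [("cat", "N"), ("mouse", "N"), ("runs", "V"), ("sleeps", "V")]
p = smoothed_word_given_tag(training_pairs, vocab_size=4)
print(p("mouse", "N"))  # seen pair:   (1 + 1) / (2 + 4) ≈ 0.33
print(p("mouse", "V"))  # unseen pair: (0 + 1) / (2 + 4) ≈ 0.17, not zero
```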
Consider the following CF grammar \(G_1\): \( R_1: \text{S} \rightarrow \text{NP VP} \) \( R_2: \text{S} \rightarrow \text{NP VP PNP} \) \( R_3: \text{PNP} \rightarrow \text{Prep NP} \) \( R_4: \text{NP} \rightarrow \text{N} \) \( R_5: \text{NP} \rightarrow \text{Det N} \) \( R_6: \text{NP} \rightarrow \text{Det N PNP} \) \( R_7: \text{VP} \rightarrow \text{V} \) \( R_8: \text{VP} \rightarrow \text{V NP} \) (where \(\text{Det}\), \(\text{N}\), \(\text{Prep}\) and \(\text{V}\) are the only pre-terminals), complemented by an adequate lexicon \(L_1\). Assume that the grammar \(G_1\) has been associated with a valid choice of probabilistic coefficients, but then needs to be converted into an equivalent SCFG in extended Chomsky Normal Form. Is it possible to derive the stochastic coefficients of the grammar resulting from the conversion from those of \(G_1\)?
Similar to a CFG, a probabilistic context-free grammar \(G\) can be defined by a quintuple \(G = (M, T, R, S, P)\), where \(M\) is the set of non-terminal symbols, \(T\) is the set of terminal symbols, \(R\) is the set of production rules, \(S\) is the start symbol, and \(P\) is the set of probabilities on production rules.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the following CF grammar \(G_1\): \( R_1: \text{S} \rightarrow \text{NP VP} \) \( R_2: \text{S} \rightarrow \text{NP VP PNP} \) \( R_3: \text{PNP} \rightarrow \text{Prep NP} \) \( R_4: \text{NP} \rightarrow \text{N} \) \( R_5: \text{NP} \rightarrow \text{Det N} \) \( R_6: \text{NP} \rightarrow \text{Det N PNP} \) \( R_7: \text{VP} \rightarrow \text{V} \) \( R_8: \text{VP} \rightarrow \text{V NP} \) (where \(\text{Det}\), \(\text{N}\), \(\text{Prep}\) and \(\text{V}\) are the only pre-terminals), complemented by an adequate lexicon \(L_1\). Assume that the grammar \(G_1\) has been associated with a valid choice of probabilistic coefficients, but then needs to be converted into an equivalent SCFG in extended Chomsky Normal Form. Is it possible to derive the stochastic coefficients of the grammar resulting from the conversion from those of \(G_1\)?
A probabilistic context-free grammar consists of terminal and non-terminal variables. Each feature to be modeled has a production rule that is assigned a probability estimated from a training set of RNA structures. Production rules are recursively applied until only terminal residues are left. A starting non-terminal \(S\) produces loops.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
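For the conversion question above, the key observation is that only the rules whose right-hand side is longer than two symbols (\(R_2\) and \(R_6\)) need to be binarized for an extended Chomsky Normal Form; each such rule keeps its original coefficient on the first binary rule it produces, and the freshly introduced helper rules get probability 1, so the stochastic coefficients of the converted grammar can indeed be derived from those of \(G_1\). The sketch below illustrates this; the helper non-terminal names \(X_0, X_1\), the function name, and the example coefficient values are invented for the illustration.

```python
def binarize(rules):
    """Binarize right-hand sides longer than 2 symbols while preserving probabilities.

    `rules` is a list of (lhs, rhs_tuple, prob). Each long rule keeps its
    original probability on the first binary rule; the helper rules that
    spell out the remaining symbols get probability 1.0.
    """
    out, fresh = [], 0
    for lhs, rhs, p in rules:
        while len(rhs) > 2:
            new_nt = f"X{fresh}"
            fresh += 1
            out.append((lhs, (rhs[0], new_nt), p))
            lhs, rhs, p = new_nt, rhs[1:], 1.0
        out.append((lhs, rhs, p))
    return out

g1_fragment = [
    ("S", ("NP", "VP"), 0.7),           # R1 with example coefficient p1
    ("S", ("NP", "VP", "PNP"), 0.3),    # R2 with example coefficient p2
    ("NP", ("Det", "N", "PNP"), 0.2),   # R6 with example coefficient p6
]
for rule in binarize(g1_fragment):
    print(rule)
# ('S', ('NP', 'VP'), 0.7)
# ('S', ('NP', 'X0'), 0.3)
# ('X0', ('VP', 'PNP'), 1.0)
# ('NP', ('Det', 'X1'), 0.2)
# ('X1', ('N', 'PNP'), 1.0)
```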
Consider the (toy) grammar $G$ consisting of the following rules: R1: S --> NP VP R2: NP --> NN R3: NP --> Det NN R4: NN --> N R5: NN --> NN NN R6: NN --> NN PNP R7: PNP --> Prep NP R8: VP --> V R9: VP --> Adv V What is the number $N$ of additional rules that should be added to $G$ to make it applicable to any sequence of words from a set of 10000 distinct words with an average syntactic ambiguity of 1.5? Justify your answer.
Thus, a prescribed sequence controlled grammar is at least approximately a 5-tuple \(G = (N, T, S, P, R)\) where everything except \(R\) is the same as in a CFG, and \(R\) is an infinite set of valid derivation sequences \(p_1 p_2 \dots\)
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
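A hedged worked computation for the question above, under the standard reading that the missing rules are the lexical rules (of the form pre-terminal \(\rightarrow\) word) needed to attach actual words to the grammar: with 10000 distinct words and an average syntactic ambiguity of 1.5 (each word belonging to 1.5 parts of speech on average), the number of additional rules is \(N = 10000 \times 1.5 = 15000\).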
Consider the (toy) grammar $G$ consisting of the following rules: R1: S --> NP VP R2: NP --> NN R3: NP --> Det NN R4: NN --> N R5: NN --> NN NN R6: NN --> NN PNP R7: PNP --> Prep NP R8: VP --> V R9: VP --> Adv V What is the number $N$ of additional rules that should be added to $G$ to make it applicable to any sequence of words from a set of 10000 distinct words with an average syntactic ambiguity of 1.5? Justify your answer.
The theorem can be used in analytic combinatorics to estimate the number of words of length n generated by a given unambiguous context-free grammar, as n grows large. The following example is given by Gruber, Lee & Shallit (2012): the unambiguous context-free grammar G over the alphabet {0,1} has start symbol S and the following rules: S → M | U, M → 0M1M | ε, U → 0S | 0M1U. To obtain an algebraic representation of the power series \(G(x)\) associated with a given context-free grammar G, one transforms the grammar into a system of equations. This is achieved by replacing each occurrence of a terminal symbol by x, each occurrence of ε by the integer '1', each occurrence of '→' by '=', and each occurrence of '|' by '+', respectively. The operation of concatenation at the right-hand side of each rule corresponds to the multiplication operation in the equations thus obtained.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
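As a worked application of the transformation just described (each terminal 0 or 1 replaced by \(x\), \(\varepsilon\) by 1, '→' by '=', '|' by '+', and concatenation by multiplication), the example grammar yields the system \(S(x) = M(x) + U(x)\), \(M(x) = x^2 M(x)^2 + 1\), and \(U(x) = x\,S(x) + x^2 M(x) U(x)\).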