Dataset columns: question (string, lengths 6–3.53k), text (string, lengths 17–2.05k), source (string, 1 distinct value).
Build the inverse document-frequency matrix (idf)
LSA can use a document-term matrix which describes the occurrences of terms in documents; it is a sparse matrix whose rows correspond to terms and whose columns correspond to documents. A typical example of the weighting of the elements of the matrix is tf-idf (term frequency–inverse document frequency): the weight of an element of the matrix is proportional to the number of times the terms appear in each document, where rare terms are upweighted to reflect their relative importance. This matrix is also common to standard semantic models, though it is not necessarily explicitly expressed as a matrix, since the mathematical properties of matrices are not always used.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Build the inverse document-frequency matrix (idf)
The inverse document frequency is a measure of how much information the word provides, i.e., whether it is common or rare across all documents. It is the logarithmically scaled inverse fraction of the documents that contain the word (obtained by dividing the total number of documents by the number of documents containing the term, and then taking the logarithm of that quotient): $\mathrm{idf}(t,D)=\log\frac{N}{|\{d\in D: t\in d\}|}$, where $N=|D|$ is the total number of documents in the corpus and $|\{d\in D: t\in d\}|$ is the number of documents in which the term $t$ appears (i.e., $\mathrm{tf}(t,d)\neq 0$). If the term is not in the corpus, this leads to a division by zero. It is therefore common to adjust the numerator to $1+N$ and the denominator to $1+|\{d\in D: t\in d\}|$.
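The smoothed idf above can be sketched in a few lines; the documents-as-sets representation and the function name are illustrative choices, not part of the source:

```python
import math

def idf(term, documents):
    """Smoothed inverse document frequency.

    Adds 1 to both numerator and denominator so a term absent
    from the corpus does not cause a division by zero.
    """
    n_containing = sum(1 for doc in documents if term in doc)
    return math.log((1 + len(documents)) / (1 + n_containing))

docs = [{"cat", "sat"}, {"cat", "ran"}, {"dog", "ran"}]
print(idf("cat", docs))   # common term -> low idf
print(idf("lion", docs))  # unseen term -> highest idf, no ZeroDivisionError
```

Rare terms get the largest idf values, matching the upweighting of rare terms described for tf-idf.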
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following are part of the RDF schema language?
RDF Schema (Resource Description Framework Schema, variously abbreviated as RDFS, RDF(S), RDF-S, or RDF/S) is a set of classes with certain properties using the RDF extensible knowledge representation data model, providing basic elements for the description of ontologies. It uses various forms of RDF vocabularies, intended to structure RDF resources. RDF and RDFS can be saved in a triplestore, then one can extract some knowledge from them using a query language, like SPARQL. The first version was published by the World-Wide Web Consortium (W3C) in April 1998, and the final W3C recommendation was released in February 2014. Many RDFS components are included in the more expressive Web Ontology Language (OWL).
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following are part of the RDF schema language?
An RDF-based model can be represented in a variety of syntaxes, e.g., RDF/XML, N3, Turtle, and RDFa. RDF is a fundamental standard of the Semantic Web. RDF Schema extends RDF and is a vocabulary for describing properties and classes of RDF-based resources, with semantics for generalized-hierarchies of such properties and classes.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Create a function that parses the input documents and creates a dictionary with the terms and term frequencies.
4. Computing term frequencies or tf-idf After pre-processing the text data, we can then proceed to generate features. For document clustering, one of the most common ways to generate features for a document is to calculate the term frequencies of all its tokens.
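One minimal sketch of the parsing task: map each document to a dictionary of term frequencies. The tokenizer here is a simple lowercase word split; a real pipeline would apply the pre-processing mentioned above first:

```python
import re
from collections import Counter

def term_frequencies(documents):
    """Map each document id to a dict of {term: raw count}."""
    tf = {}
    for doc_id, text in documents.items():
        tokens = re.findall(r"[a-z0-9]+", text.lower())
        tf[doc_id] = dict(Counter(tokens))
    return tf

docs = {"d1": "The cat sat on the mat.", "d2": "The dog ran."}
print(term_frequencies(docs))
```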
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Create a function that parses the input documents and creates a dictionary with the terms and term frequencies.
The second phase is searching. The user's search query term is parsed into a possible phoneme string using a phonetic dictionary. Then, multiple PAT files can be scanned at high speed during a single search for likely phonetic sequences that closely match corresponding strings of phonemes in the query term.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Dude said “I like bowling”. With how many statements can we express this sentence using ​ RDF Reification?
The conventional use of the RDF reification vocabulary always involves describing a statement using four statements in this pattern; they are therefore sometimes referred to as the "reification quad". Using reification according to this convention, we could record the fact that person:p3 added the statement to the database. It is important to note that in the conventional use of reification, the subject of the reification triples is assumed to identify a particular instance of a triple in a particular RDF document, rather than some arbitrary triple having the same subject, predicate, and object. This particular convention is used because reification is intended for expressing properties such as dates of composition and source information, as in the examples given already, and these properties need to be applied to specific instances of triples.
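The four statements of the reification quad for "Dude said 'I like bowling'" can be listed explicitly; the identifiers `ex:stmt1`, `ex:dude`, `ex:likes`, and `ex:bowling` are hypothetical names for this example, and the triples are shown as plain tuples rather than in a concrete RDF syntax:

```python
# The four triples of a conventional RDF reification "quad".
statement_id = "ex:stmt1"
reification_quad = [
    (statement_id, "rdf:type",      "rdf:Statement"),
    (statement_id, "rdf:subject",   "ex:dude"),
    (statement_id, "rdf:predicate", "ex:likes"),
    (statement_id, "rdf:object",    "ex:bowling"),
]
# The original triple itself, if it is also asserted:
original = ("ex:dude", "ex:likes", "ex:bowling")
print(len(reification_quad))
```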
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Dude said “I like bowling”. With how many statements can we express this sentence using ​ RDF Reification?
(19)a. Pat was like “I’ll call you.” b. and then my sister’s all “excuse me would you mind if I gave you, if I want your autograph” and she’s like “oh sure, no problem.” c. And he goes “yeah” and looks and you can tell maybe he thinks he's got the wrong address These forms, particularly be like, have captured the attention of much linguistic study and documentation. Some research has addressed the syntax of these forms in quotation, which is highly problematic.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the following set of frequent 3-itemsets: {1, 2, 3}, {1, 2, 4}, {1, 2, 5}, {1, 3, 4}, {2, 3, 4}, {2, 3, 5}, {3, 4, 5}. Which one is not a candidate 4-itemset?
The set of possible itemsets is the power set over I and has size $2^n - 1$ (excluding the empty set, which is not considered a valid itemset). The size of the power set thus grows exponentially in the number of items n in I. An efficient search is possible by using the downward-closure property of support (also called anti-monotonicity), which guarantees that all subsets of a frequent itemset are also frequent, and hence that no frequent itemset can have an infrequent subset. Exploiting this property, efficient algorithms (e.g., Apriori and Eclat) can find all frequent itemsets.
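The downward-closure property drives Apriori's candidate generation: join frequent k-itemsets that share their first k−1 items, then prune any candidate with an infrequent k-subset. A sketch applied to the 3-itemsets from the question:

```python
from itertools import combinations

def candidate_gen(frequent_k, k):
    """Apriori join + prune: produce candidate (k+1)-itemsets."""
    sets = sorted(tuple(sorted(s)) for s in frequent_k)
    joined = set()
    for a, b in combinations(sets, 2):
        if a[:k - 1] == b[:k - 1]:           # join step: common (k-1)-prefix
            joined.add(frozenset(a) | frozenset(b))
    frequent = set(map(frozenset, frequent_k))
    # prune step: every k-subset of a candidate must itself be frequent
    return {c for c in joined
            if all(frozenset(s) in frequent for s in combinations(c, k))}

f3 = [{1,2,3},{1,2,4},{1,2,5},{1,3,4},{2,3,4},{2,3,5},{3,4,5}]
print(candidate_gen(f3, 3))  # only {1,2,3,4} survives the prune
```

The join step produces {1,2,3,4}, {1,2,3,5}, {1,2,4,5}, and {2,3,4,5}, but the last three each contain a 3-subset (e.g. {1,3,5}) that is not frequent.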
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Consider the following set of frequent 3-itemsets: {1, 2, 3}, {1, 2, 4}, {1, 2, 5}, {1, 3, 4}, {2, 3, 4}, {2, 3, 5}, {3, 4, 5}. Which one is not a candidate 4-itemset?
The pairs {1,3} and {1,4} are not. Now, because {1,3} and {1,4} are not frequent, any larger set which contains {1,3} or {1,4} cannot be frequent. In this way, we can prune sets: we will now look for frequent triples in the database, but we can already exclude all the triples that contain one of these two pairs. In the example, there are no frequent triples: {2,3,4} is below the minimal threshold, and the other triples were excluded because they were supersets of pairs that were already below the threshold. We have thus determined the frequent sets of items in the database, and illustrated how some items were not counted because one of their subsets was already known to be below the threshold.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following is true in the context of inverted files?
One method is superimposed coding. A post-processing step is done to discard the false alarms. Since in most cases this structure is inferior to inverted files in terms of speed, size and functionality, it is not used widely. However, with proper parameters it can beat the inverted files in certain environments.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following is true in the context of inverted files?
In an inverted file or inverted index, the contents of the data are used as keys in a lookup table, and the values in the table are pointers to the location of each instance of a given content item. This is also the logical structure of contemporary database indexes, which might use only the contents of particular columns in the lookup table. The inverted file data model can put indexes in a set of files next to existing flat database files, in order to access needed records in these files directly and efficiently. Notable for using this data model is the ADABAS DBMS of Software AG, introduced in 1970.
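A minimal sketch of the structure described above, with terms as lookup keys and document ids as the "pointers"; the toy documents are made up for illustration:

```python
from collections import defaultdict

def build_inverted_index(documents):
    """Build a simple inverted file: term -> sorted list of the ids of
    documents containing that term (content becomes the lookup key)."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}

docs = {1: "new home sales", 2: "home prices rise", 3: "new prices"}
index = build_inverted_index(docs)
print(index["new"])     # [1, 3]
print(index["prices"])  # [2, 3]
```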
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Regarding Label Propagation, which of the following is false?
A natural assumption in network classification is that adjacent nodes are likely to have the same label (i.e., contagion or homophily). The predictor for node $V_i$ using the label propagation method is a weighted average of its neighboring labels $Y_{N_i}$.
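One variant of this idea, sketched under the assumption that labeled seed nodes are clamped and unlabeled nodes repeatedly take the weighted average of their neighbors' current labels (the graph and weights below are made up):

```python
def propagate(labels, neighbors, weights, n_iter=50):
    """Iteratively set each unlabeled node to the weighted average of
    its neighbors' labels in [0, 1]; seed nodes keep their known label."""
    seeds = dict(labels)
    y = {v: seeds.get(v, 0.5) for v in neighbors}
    for _ in range(n_iter):
        for v in neighbors:
            if v in seeds:
                continue
            total = sum(weights[(v, u)] for u in neighbors[v])
            y[v] = sum(weights[(v, u)] * y[u] for u in neighbors[v]) / total
    return y

# Path graph a - b - c, with a labeled 0 and c labeled 1.
nbrs = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
w = {("a", "b"): 1, ("b", "a"): 1, ("b", "c"): 1, ("c", "b"): 1}
y = propagate({"a": 0.0, "c": 1.0}, nbrs, w)
print(y)  # b settles midway between its two neighbors
```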
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Regarding Label Propagation, which of the following is false?
While label propagation is surprisingly effective, it may sometimes fail to capture complex relational dynamics. More sophisticated approaches can use richer predictors. Suppose we have a classifier $h$ that has been trained to classify a node $v_i$ given its features $X_i$ and the features $X_{N_i}$ and labels $Y_{N_i}$ of its neighbors $N_i$. Iterative classification uses a local classifier for each node, which draws on current predictions and ground-truth information about the node's neighbors, and iterates until the local predictions converge to a global solution. Iterative classification is an "algorithmic framework," in that it is agnostic to the choice of predictor; this makes it a very versatile tool for collective classification.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Implement the recall at k metric
For modern (web-scale) information retrieval, recall is no longer a meaningful metric, as many queries have thousands of relevant documents, and few users will be interested in reading all of them. Precision at k documents (P@k) is still a useful metric (e.g., P@10 or "Precision at 10" corresponds to the number of relevant results among the top 10 retrieved documents), but fails to take into account the positions of the relevant documents among the top k. Another shortcoming is that on a query with fewer relevant results than k, even a perfect system will have a score less than 1. It is easier to score manually since only the top k results need to be examined to determine if they are relevant or not.
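A sketch of both top-k metrics discussed here, including the recall@k the task asks for; the ranked list and relevance judgments are made-up examples:

```python
def recall_at_k(retrieved, relevant, k):
    """Fraction of all relevant documents found in the top-k results."""
    hits = len(set(retrieved[:k]) & set(relevant))
    return hits / len(relevant) if relevant else 0.0

def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k results that are relevant."""
    return len(set(retrieved[:k]) & set(relevant)) / k

retrieved = ["d3", "d1", "d7", "d2", "d9"]   # ranked results
relevant = ["d1", "d2", "d4"]                # ground-truth relevant set
print(recall_at_k(retrieved, relevant, 4))    # 2/3
print(precision_at_k(retrieved, relevant, 4)) # 0.5
```

Note how P@k stays below 1 whenever a query has fewer than k relevant documents, as the text points out, while recall@k can still reach 1.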
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The type statement in RDF would be expressed in the relational data model by a table
The RDF typed links are fundamental in LOD datasets for identifying the relationship (predicate) type of RDF triples, contributing to the automatic processability of machine-readable statements of the Giant Global Graph on the Semantic Web. The typed links in RDF are expressed as the value of the rdf:type property, defining the relationship type using well-established controlled vocabulary terms or definitions from LOD datasets such as
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The type statement in RDF would be expressed in the relational data model by a table
A database model is a type of data model that determines the logical structure of a database and fundamentally determines in which manner data can be stored, organized, and manipulated. The most popular example of a database model is the relational model (or the SQL approximation of relational), which uses a table-based format. Common logical data models for databases include: navigational databases, the hierarchical database model, the network model, graph databases, the relational model, the entity–relationship model, the enhanced entity–relationship model, the object model, the document model, the entity–attribute–value model, and the star schema. An object–relational database combines the two related structures. Physical data models include the inverted index and the flat file. Other models include the multidimensional model, the array model, and the multivalue model. Specialized models are optimized for particular types of data: XML databases, semantic models, content stores, event stores, and time series models.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Given graph 1→2, 1→3, 2→3, 3→2, switching from Page Rank to Teleporting PageRank will have an influence on the value(s) of:
Through this data, they concluded the algorithm can be scaled very well and that the scaling factor for extremely large networks would be roughly linear in $\log n$, where n is the size of the network. As a result of Markov theory, it can be shown that the PageRank of a page is the probability of arriving at that page after a large number of clicks. This happens to equal $t^{-1}$, where $t$ is the expectation of the number of clicks (or random jumps) required to get from the page back to itself.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Given graph 1→2, 1→3, 2→3, 3→2, switching from Page Rank to Teleporting PageRank will have an influence on the value(s) of:
As Google increases the number of documents in its collection, the initial approximation of PageRank decreases for all documents. The formula uses a model of a random surfer who reaches their target site after several clicks, then switches to a random page. The PageRank value of a page reflects the chance that the random surfer will land on that page by clicking on a link.
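The random-surfer model can be sketched as power iteration with a damping factor: with probability d the surfer follows a uniformly chosen out-link, otherwise teleports to a random page. Applied to the graph from the question (1→2, 1→3, 2→3, 3→2):

```python
def pagerank(out_links, d=0.85, n_iter=100):
    """Power iteration for teleporting PageRank over a dict of out-links."""
    nodes = list(out_links)
    n = len(nodes)
    rank = {v: 1 / n for v in nodes}
    for _ in range(n_iter):
        new = {v: (1 - d) / n for v in nodes}   # teleport mass
        for v, targets in out_links.items():
            for t in targets:
                new[t] += d * rank[v] / len(targets)
        rank = new
    return rank

r = pagerank({1: [2, 3], 2: [3], 3: [2]})
print(r)
```

Node 1 has no in-links, so it receives only the teleport mass (1−d)/3; without teleporting its rank would drain to zero, while 2 and 3 end up tied by symmetry.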
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The number of term vectors in the matrix K_s, used for LSI
We note that the multiplicative factors for W and H, i.e. the $\frac{\mathbf{W}^{\mathsf{T}}\mathbf{V}}{\mathbf{W}^{\mathsf{T}}\mathbf{W}\mathbf{H}}$ and $\frac{\mathbf{V}\mathbf{H}^{\mathsf{T}}}{\mathbf{W}\mathbf{H}\mathbf{H}^{\mathsf{T}}}$ terms, are matrices of ones when $\mathbf{V}=\mathbf{W}\mathbf{H}$. More recently other algorithms have been developed.
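A sketch of these multiplicative updates (the Lee–Seung rules) on a small rank-1 matrix; the iteration count and epsilon are illustrative choices:

```python
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9):
    """Multiplicative-update NMF: V ~ W H with nonnegative factors.
    At a fixed point V = W H, both update factors become matrices of ones."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.array([[1., 2., 3.], [2., 4., 6.]])   # an exactly rank-1 matrix
W, H = nmf(V, r=1)
print(np.abs(V - W @ H).max())               # near 0 for a rank-1 fit
```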
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The number of term vectors in the matrix K_s, used for LSI
A rank-reduced singular value decomposition is performed on the matrix to determine patterns in the relationships between the terms and concepts contained in the text. The SVD forms the foundation for LSI. It computes the term and document vector spaces by approximating the single term-frequency matrix $A$ with three other matrices: an m by r term-concept vector matrix $T$, an r by r singular values matrix $S$, and an n by r concept-document vector matrix $D$, which satisfy the following relations: $A \approx TSD^{T}$, with $T^{T}T = I_r$, $D^{T}D = I_r$, and $S_{1,1} \geq S_{2,2} \geq \ldots \geq S_{r,r} > 0$, $S_{i,j} = 0$ where $i \neq j$. In the formula, A is the supplied m by n weighted matrix of term frequencies in a collection of text, where m is the number of unique terms and n is the number of documents. T is a computed m by r matrix of term vectors, where r is the rank of A—a measure of its unique dimensions ≤ min(m,n).
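The relations above can be checked numerically on a toy term-document matrix (the counts below are made up); `numpy.linalg.svd` returns exactly the $T$, $S$, $D^T$ factors in the stated ordering:

```python
import numpy as np

# A small term-document count matrix: m = 4 terms, n = 3 documents.
A = np.array([[2., 0., 1.],
              [1., 1., 0.],
              [0., 2., 1.],
              [0., 1., 2.]])

T, s, Dt = np.linalg.svd(A, full_matrices=False)  # A = T diag(s) D^T
k = 2                                             # keep k largest values
A_k = T[:, :k] @ np.diag(s[:k]) @ Dt[:k, :]       # rank-k LSI approximation

print(s)                                          # decreasing singular values
print(np.abs(A - T @ np.diag(s) @ Dt).max())      # full SVD reconstructs A
```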
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following is true regarding the random forest classification algorithm?
Finally, the idea of randomized node optimization, where the decision at each node is selected by a randomized procedure rather than a deterministic optimization, was first introduced by Thomas G. Dietterich. The proper introduction of random forests was made in a paper by Leo Breiman. This paper describes a method of building a forest of uncorrelated trees using a CART-like procedure, combined with randomized node optimization and bagging. In addition, this paper combines several ingredients, some previously known and some novel, which form the basis of the modern practice of random forests, in particular: using out-of-bag error as an estimate of the generalization error, and measuring variable importance through permutation. The report also offers the first theoretical result for random forests in the form of a bound on the generalization error which depends on the strength of the trees in the forest and their correlation.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following is true regarding the random forest classification algorithm?
Random forests or random decision forests is an ensemble learning method for classification, regression and other tasks that operates by constructing a multitude of decision trees at training time. For classification tasks, the output of the random forest is the class selected by most trees. For regression tasks, the mean or average prediction of the individual trees is returned. Random decision forests correct for decision trees' habit of overfitting to their training set.
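The two aggregation rules described here (majority vote for classification, mean for regression) can be sketched directly, assuming we already hold each tree's prediction for one sample; the labels are made up:

```python
from collections import Counter

def forest_predict(tree_predictions):
    """Classification: the class selected by the most trees."""
    return Counter(tree_predictions).most_common(1)[0][0]

def forest_regress(tree_outputs):
    """Regression: the mean of the individual trees' predictions."""
    return sum(tree_outputs) / len(tree_outputs)

print(forest_predict(["spam", "ham", "spam", "spam", "ham"]))  # spam
print(forest_regress([2.0, 3.0, 4.0]))                          # 3.0
```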
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following properties is part of the RDF Schema Language?
RDF Schema (Resource Description Framework Schema, variously abbreviated as RDFS, RDF(S), RDF-S, or RDF/S) is a set of classes with certain properties using the RDF extensible knowledge representation data model, providing basic elements for the description of ontologies. It uses various forms of RDF vocabularies, intended to structure RDF resources. RDF and RDFS can be saved in a triplestore, then one can extract some knowledge from them using a query language, like SPARQL. The first version was published by the World-Wide Web Consortium (W3C) in April 1998, and the final W3C recommendation was released in February 2014. Many RDFS components are included in the more expressive Web Ontology Language (OWL).
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following properties is part of the RDF Schema Language?
An RDF-based model can be represented in a variety of syntaxes, e.g., RDF/XML, N3, Turtle, and RDFa. RDF is a fundamental standard of the Semantic Web. RDF Schema extends RDF and is a vocabulary for describing properties and classes of RDF-based resources, with semantics for generalized-hierarchies of such properties and classes.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
If rule {A,B} -> {C} has confidence c1 and rule {A} -> {C} has confidence c2, then
The confidence for Rule 2 is 2/3 because two of the three records that meet the antecedent B meet the consequent 1. The confidences can be written as $\operatorname{conf}(A \Rightarrow 0) = P(0 \mid A)$ and $\operatorname{conf}(B \Rightarrow 1) = P(1 \mid B)$. Lift can be found by dividing the confidence by the unconditional probability of the consequent, or by dividing the support by the probability of the antecedent times the probability of the consequent: $\operatorname{lift}(A \Rightarrow 0) = \frac{P(0 \mid A)}{P(0)} = \frac{P(A \land 0)}{P(A)P(0)}$ and $\operatorname{lift}(B \Rightarrow 1) = \frac{P(1 \mid B)}{P(1)} = \frac{P(B \land 1)}{P(B)P(1)}$. The lift for Rule 1 is (3/4)/(4/7) = (3·7)/(4·4) = 21/16 ≈ 1.31, and the lift for Rule 2 is (2/3)/(3/7) = (2·7)/(3·3) = 14/9 ≈ 1.56. If some rule had a lift of 1, it would imply that the probability of occurrence of the antecedent and that of the consequent are independent of each other. When two events are independent of each other, no rule can be drawn involving those two events.
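These definitions can be sketched over a toy transaction set (the transactions are made up). Note that conf({A,B}⇒C) can come out either higher or lower than conf({A}⇒C): adding B to the antecedent only changes which transactions are conditioned on, in no fixed direction.

```python
def support(itemset, transactions):
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """conf(X => Y) = support(X u Y) / support(X) = P(Y | X)."""
    return (support(antecedent | consequent, transactions)
            / support(antecedent, transactions))

def lift(antecedent, consequent, transactions):
    """lift(X => Y) = conf(X => Y) / P(Y); a lift of 1 means independence."""
    return (confidence(antecedent, consequent, transactions)
            / support(consequent, transactions))

ts = [{"A", "B", "C"}, {"A", "C"}, {"A", "B"}, {"B", "C"}]
print(confidence({"A", "B"}, {"C"}, ts))  # 1/2
print(confidence({"A"}, {"C"}, ts))       # 2/3
```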
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
If rule {A,B} -> {C} has confidence c1 and rule {A} -> {C} has confidence c2, then
The inference rule is modus ponens: $\frac{\phi,\ \phi \to \chi}{\chi}$.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Implement Connectivity-Based Community Ranking by doing the following: - Compute a meta graph where nodes are communities and edges denote inter-connections across communities. - Add the weights of the inter-connections as weights to the edges. - Compute `pagerank` on the meta graph. - Hint: `w_matrix` is the confusion matrix of the weights among the communities. `w_matrix` is not symmetric.
A PageRank results from a mathematical algorithm based on the webgraph, created by all World Wide Web pages as nodes and hyperlinks as edges, taking into consideration authority hubs such as cnn.com or mayoclinic.org. The rank value indicates an importance of a particular page. A hyperlink to a page counts as a vote of support.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Implement Connectivity-Based Community Ranking by doing the following: - Compute a meta graph where nodes are communities and edges denote inter-connections across communities. - Add the weights of the inter-connections as weights to the edges. - Compute `pagerank` on the meta graph. - Hint: `w_matrix` is the confusion matrix of the weights among the communities. `w_matrix` is not symmetric.
An example is Google's PageRank algorithm. The principal eigenvector of a modified adjacency matrix of the World Wide Web graph gives the page ranks as its components.
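For the community-ranking task above, the principal-eigenvector idea can be sketched directly on `w_matrix`: rows are normalized so each community distributes its rank in proportion to its outgoing inter-connection weights, and power iteration with teleporting gives the ranks. The weight values below are hypothetical:

```python
import numpy as np

def community_pagerank(w_matrix, d=0.85, n_iter=100):
    """PageRank over a community meta-graph; w_matrix[i][j] is the total
    weight of inter-connections from community i to j (not symmetric)."""
    W = np.asarray(w_matrix, dtype=float)
    n = W.shape[0]
    row_sums = W.sum(axis=1, keepdims=True)
    # Row-stochastic transition matrix; rows with no out-weight get uniform.
    P = np.divide(W, row_sums, out=np.full_like(W, 1 / n),
                  where=row_sums > 0)
    r = np.full(n, 1 / n)
    for _ in range(n_iter):
        r = (1 - d) / n + d * (r @ P)
    return r

w = [[0, 5, 1],
     [2, 0, 2],
     [0, 3, 0]]          # hypothetical inter-community weights
r = community_pagerank(w)
print(r)
```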
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
How does matrix factorization address the issue of missing ratings?
Non-negative matrix factorization (NMF) can handle missing data while minimizing its cost function, rather than treating the missing entries as zeros, which could introduce biases. This makes it a mathematically proven method for data imputation: NMF can ignore missing data in the cost function, and the impact of missing data can be as small as a second-order effect.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
How does matrix factorization address the issue of missing ratings?
In the case of the Netflix problem the ratings matrix is expected to be low-rank since user preferences can often be described by a few factors, such as the movie genre and time of release. Other applications include computer vision, where missing pixels in images need to be reconstructed, detecting the global positioning of sensors in a network from partial distance information, and multiclass learning. The matrix completion problem is in general NP-hard, but under additional assumptions there are efficient algorithms that achieve exact reconstruction with high probability. From a statistical learning point of view, the matrix completion problem is an application of matrix regularization, which is a generalization of vector regularization. For example, in the low-rank matrix completion problem one may apply the regularization penalty taking the form of a nuclear norm $R(X)=\lambda\|X\|_{*}$.
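The key point for missing ratings is that the factorization is fit over observed entries only, skipping the blanks instead of treating them as zeros. A sketch using SGD with a low-rank factorization (the ratings, rank, and hyperparameters are illustrative choices):

```python
import numpy as np

def factorize(R, mask, k=2, steps=2000, lr=0.01, reg=0.02):
    """Fit R ~ U V^T by SGD over observed entries only (mask == 1);
    missing ratings are skipped rather than treated as zeros."""
    rng = np.random.default_rng(0)
    m, n = R.shape
    U = 0.1 * rng.standard_normal((m, k))
    V = 0.1 * rng.standard_normal((n, k))
    rows, cols = np.nonzero(mask)
    for _ in range(steps):
        for i, j in zip(rows, cols):
            err = R[i, j] - U[i] @ V[j]
            U[i] += lr * (err * V[j] - reg * U[i])
            V[j] += lr * (err * U[i] - reg * V[j])
    return U, V

R = np.array([[5., 3., 0.], [4., 0., 1.], [1., 1., 5.]])
mask = np.array([[1, 1, 0], [1, 0, 1], [1, 1, 1]])  # 0 = missing rating
U, V = factorize(R, mask)
pred = U @ V.T
print(pred.round(1))   # observed entries are fit; blanks get predictions
```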
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
When constructing a word embedding, negative samples are
In natural language processing (NLP), a word embedding is a representation of a word. The embedding is used in text analysis. Typically, the representation is a real-valued vector that encodes the meaning of the word in such a way that words that are closer in the vector space are expected to be similar in meaning. Word embeddings can be obtained using language modeling and feature learning techniques, where words or phrases from the vocabulary are mapped to vectors of real numbers. Methods to generate this mapping include neural networks, dimensionality reduction on the word co-occurrence matrix, probabilistic models, explainable knowledge base methods, and explicit representation in terms of the context in which words appear. Word and phrase embeddings, when used as the underlying input representation, have been shown to boost the performance in NLP tasks such as syntactic parsing and sentiment analysis.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
When constructing a word embedding, negative samples are
Word embeddings may contain the biases and stereotypes present in the training dataset. Bolukbasi et al. point out in the 2016 paper "Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings" that a publicly available (and popular) word2vec embedding trained on Google News texts (a commonly used data corpus), which consists of text written by professional journalists, still shows disproportionate word associations reflecting gender and racial biases when extracting word analogies. For example, one of the analogies generated using the aforementioned word embedding is "man is to computer programmer as woman is to homemaker". Applying these trained word embeddings without careful oversight likely perpetuates existing bias in society, which is introduced through unaltered training data. Furthermore, word embeddings can even amplify these biases (Zhao et al. 2017). Given word embeddings' popular usage in NLP applications such as search ranking, CV parsing, and recommendation systems, the biases that exist in pre-trained word embeddings may have further-reaching impact than we realize.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following tasks would typically not be solved by clustering?
There are potential shortcomings for all existing clustering techniques. This may cause interpretation of results to become difficult, especially when there is no knowledge about the number of clusters. Clustering methods are also very sensitive to the initial clustering settings, which can cause non-significant data to be amplified in non-reiterative methods. An extremely important issue in cluster analysis is the validation of the clustering results, that is, how to gain confidence about the significance of the clusters provided by the clustering technique (cluster numbers and cluster assignments).
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following tasks would typically not be solved by clustering?
Current clustering techniques do not address all the requirements adequately. Dealing with a large number of dimensions and a large number of data items can be problematic because of time complexity; the effectiveness of the method depends on the definition of "distance" (for distance-based clustering); if an obvious distance measure doesn't exist, we must "define" one, which is not always easy, especially in multidimensional spaces; and the result of the clustering algorithm (which, in many cases, can be arbitrary itself) can be interpreted in different ways.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Following the notation used in class, let us denote the set of terms by $T=\{k_i|i=1,...,m\}$, the set of documents by $D=\{d_j |j=1,...,n\}$, and let $d_j=(w_{1j},w_{2j},...,w_{mj})$. We are also given a query $q=(w_{1q},w_{2q},...,w_{mq})$. In the lecture we studied that $sim(q,d_j) = \sum^m_{i=1} \frac{w_{ij}}{|d_j|}\frac{w_{iq}}{|q|}$. (1) Another way of looking at the information retrieval problem is using a probabilistic approach. The probabilistic view of information retrieval consists of determining the conditional probability $P(q|d_j)$ that for a given document $d_j$ the query by the user is $q$. So, practically in probabilistic retrieval when a query $q$ is given, for each document it is evaluated how probable it is that the query is indeed relevant for the document, which results in a ranking of the documents. In order to relate vector space retrieval to a probabilistic view of information retrieval, we interpret the weights in Equation (1) as follows: - $w_{ij}/|d_j|$ can be interpreted as the conditional probability $P(k_i|d_j)$ that for a given document $d_j$ the term $k_i$ is important (to characterize the document $d_j$). - $w_{iq}/|q|$ can be interpreted as the conditional probability $P(q|k_i)$ that for a given term $k_i$ the query posed by the user is $q$. Intuitively, $P(q|k_i)$ gives the amount of importance given to a particular term while querying. With this interpretation you can rewrite Equation (1) as follows: $sim(q,d_j) = \sum^m_{i=1} P(k_i|d_j)P(q|k_i)$ (2) Note that the model described in Question (a) provides a probabilistic interpretation for vector space retrieval where weights are interpreted as probabilities. Compare to the probabilistic retrieval model based on language models introduced in the lecture and discuss the differences.
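Equation (2) can be evaluated numerically; the sketch below uses made-up weight vectors for two documents and interprets $|d_j|$ and $|q|$ as Euclidean norms, as in Equation (1):

```python
import numpy as np

# sim(q, d_j) = sum_i P(k_i | d_j) P(q | k_i),
# with P(k_i|d_j) = w_ij / |d_j| and P(q|k_i) = w_iq / |q|.
W = np.array([[2., 0.],   # rows: terms k_1..k_3; columns: documents d_1, d_2
              [1., 1.],
              [0., 3.]])
q = np.array([1., 1., 0.])

P_k_given_d = W / np.linalg.norm(W, axis=0)   # w_ij / |d_j|, per column
P_q_given_k = q / np.linalg.norm(q)           # w_iq / |q|
sim = P_q_given_k @ P_k_given_d               # one score per document
print(sim)                                    # d_1 matches the query better
```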
Similarities are computed as probabilities that a document is relevant for a given query. Probabilistic theorems like the Bayes' theorem are often used in these models. Binary Independence Model Probabilistic relevance model on which is based the okapi (BM25) relevance function Uncertain inference Language models Divergence-from-randomness model Latent Dirichlet allocation Feature-based retrieval models view documents as vectors of values of feature functions (or just features) and seek the best way to combine these features into a single relevance score, typically by learning to rank methods. Feature functions are arbitrary functions of document and query, and as such can easily incorporate almost any other retrieval model as just another feature.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Following the notation used in class, let us denote the set of terms by $T=\{k_i|i=1,...,m\}$, the set of documents by $D=\{d_j |j=1,...,n\}$, and let $d_j=(w_{1j},w_{2j},...,w_{mj})$. We are also given a query $q=(w_{1q},w_{2q},...,w_{mq})$. In the lecture we studied that $sim(q,d_j) = \sum^m_{i=1} \frac{w_{ij}}{|d_j|}\frac{w_{iq}}{|q|}$. (1) Another way of looking at the information retrieval problem is using a probabilistic approach. The probabilistic view of information retrieval consists of determining the conditional probability $P(q|d_j)$ that for a given document $d_j$ the query by the user is $q$. So, practically in probabilistic retrieval when a query $q$ is given, for each document it is evaluated how probable it is that the query is indeed relevant for the document, which results in a ranking of the documents. In order to relate vector space retrieval to a probabilistic view of information retrieval, we interpret the weights in Equation (1) as follows: - $w_{ij}/|d_j|$ can be interpreted as the conditional probability $P(k_i|d_j)$ that for a given document $d_j$ the term $k_i$ is important (to characterize the document $d_j$). - $w_{iq}/|q|$ can be interpreted as the conditional probability $P(q|k_i)$ that for a given term $k_i$ the query posed by the user is $q$. Intuitively, $P(q|k_i)$ gives the amount of importance given to a particular term while querying. With this interpretation you can rewrite Equation (1) as follows: $sim(q,d_j) = \sum^m_{i=1} P(k_i|d_j)P(q|k_i)$ (2) Note that the model described in Question (a) provides a probabilistic interpretation for vector space retrieval where weights are interpreted as probabilities. Compare to the probabilistic retrieval model based on language models introduced in the lecture and discuss the differences.
Zhao and Callan (2010) were perhaps the first to quantitatively study the vocabulary mismatch problem in a retrieval setting. Their results show that an average query term fails to appear in 30-40% of the documents that are relevant to the user query. They also showed that this probability of mismatch is a central probability in one of the fundamental probabilistic retrieval models, the Binary Independence Model. They developed novel term weight prediction methods that can lead to potentially 50-80% accuracy gains in retrieval over strong keyword retrieval models. Further research along the line shows that expert users can use Boolean Conjunctive Normal Form expansion to improve retrieval performance by 50-300% over unexpanded keyword queries.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
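Equation (1) above can be sketched directly; a minimal version, assuming |·| denotes the Euclidean vector norm and taking toy term-weight vectors as input:

```python
import math

def sim(q_weights, d_weights):
    """Equation (1): sum over terms of (w_ij / |d_j|) * (w_iq / |q|),
    i.e. each weight divided by its vector's Euclidean length."""
    d_norm = math.sqrt(sum(w * w for w in d_weights))
    q_norm = math.sqrt(sum(w * w for w in q_weights))
    return sum((wd / d_norm) * (wq / q_norm)
               for wd, wq in zip(d_weights, q_weights))
```

Reading each normalized document weight as $P(k_i|d_j)$ and each normalized query weight as $P(q|k_i)$ turns the same sum into Equation (2).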
Implement cosine similarity between two vectors
In data analysis, cosine similarity is a measure of similarity between two non-zero vectors defined in an inner product space. Cosine similarity is the cosine of the angle between the vectors; that is, it is the dot product of the vectors divided by the product of their lengths. It follows that the cosine similarity does not depend on the magnitudes of the vectors, but only on their angle. The cosine similarity always belongs to the interval [−1, 1].
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
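A minimal implementation of the definition above (dot product divided by the product of the vectors' lengths); the error guard reflects the requirement that both vectors be non-zero:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between vectors a and b."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    if norm_a == 0 or norm_b == 0:
        raise ValueError("cosine similarity is undefined for zero vectors")
    return dot / (norm_a * norm_b)
```

Because only the angle matters, scaling either vector leaves the result unchanged.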
Implement cosine similarity between two vectors
Given two N-dimension vectors a and b, the soft cosine similarity is calculated as follows: soft_cosine_1(a, b) = (∑_{i,j}^N s_ij a_i b_j) / (√(∑_{i,j}^N s_ij a_i a_j) · √(∑_{i,j}^N s_ij b_i b_j)), where s_ij = similarity(feature_i, feature_j). If there is no similarity between features (s_ii = 1, s_ij = 0 for i ≠ j), the given equation is equivalent to the conventional cosine similarity formula. The time complexity of this measure is quadratic, which makes it applicable to real-world tasks. Note that the complexity can be reduced to subquadratic. An efficient implementation of such soft cosine similarity is included in the Gensim open source library.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
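A direct sketch of the soft cosine formula above; `s` is the feature-similarity matrix with entries s_ij. With the identity matrix it reduces to ordinary cosine similarity, as the passage notes:

```python
import math

def soft_cosine(a, b, s):
    """Soft cosine of two equal-length vectors a and b, given a
    feature-similarity matrix s (s[i][j] = similarity of features i, j)."""
    def form(u, v):
        # the bilinear form sum_{i,j} s_ij u_i v_j
        return sum(s[i][j] * u[i] * v[j]
                   for i in range(len(u)) for j in range(len(v)))
    return form(a, b) / (math.sqrt(form(a, a)) * math.sqrt(form(b, b)))
```

The double loop makes the quadratic time complexity mentioned above explicit.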
You are given the following accident and weather data. Each line corresponds to one event: 1. car_accident rain lightning wind clouds fire 2. fire clouds rain lightning wind 3. car_accident fire wind 4. clouds rain wind 5. lightning fire rain clouds 6. clouds wind car_accident 7. rain lightning clouds fire 8. lightning fire car_accident (b) Find all the association rules for minimal support 0.6 and minimal confidence of 1.0 (certainty). Follow the apriori algorithm.
(t)atá = fire; itá = rock, stone, metal; y = water, river; yby = earth, ground; ybytu = air, wind
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
You are given the following accident and weather data. Each line corresponds to one event: 1. car_accident rain lightning wind clouds fire 2. fire clouds rain lightning wind 3. car_accident fire wind 4. clouds rain wind 5. lightning fire rain clouds 6. clouds wind car_accident 7. rain lightning clouds fire 8. lightning fire car_accident (b) Find all the association rules for minimal support 0.6 and minimal confidence of 1.0 (certainty). Follow the apriori algorithm.
The Storm Prediction Center issues convective outlooks (AC), consisting of categorical and probabilistic forecasts describing the general threat of severe convective storms over the contiguous United States for the next six to 192 hours (Day 1 through Day 8). These outlooks are labeled and issued by day, and are issued up to five times per day. The categorical risks are TSTM (for Thunder Storm: light green shaded area – rendered as a brown line prior to April 2011 – indicating a risk for general thunderstorms); "MRGL" (for Marginal: darker green shaded area, indicating a very low but present risk of severe weather); "SLGT" (for Slight: yellow shaded area – previously rendered as a green line – indicating a slight risk of severe weather); "ENH" (for Enhanced: orange shaded area, which replaced the upper end of the SLGT category on October 22, 2014); "MDT" (for Moderate: red shaded area – previously rendered as a red line – indicating a moderate risk of severe weather); and "HIGH" (pink shaded area – previously rendered as a fuchsia line – indicating a high risk of severe weather). Significant severe areas (referred to as "hatched areas" because of their representation on outlook maps) refer to a threat of increased storm intensity that is of "significant severe" levels (F2/EF2 or stronger tornado, 2 inches (5.1 cm) or larger hail, or 75 miles per hour (121 km/h) winds or greater). In April 2011, the SPC introduced a new graphical format for its categorical and probability outlooks, which included the shading of risk areas (with the colors corresponding to each category, as mentioned above, being changed as well) and population, county/parish/borough and interstate overlays.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
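The apriori computation for the eight events above can be sketched compactly (minimum support 0.6 over 8 events means an itemset must appear in at least 5 of them); the function names are illustrative:

```python
from itertools import combinations

transactions = [
    {"car_accident", "rain", "lightning", "wind", "clouds", "fire"},
    {"fire", "clouds", "rain", "lightning", "wind"},
    {"car_accident", "fire", "wind"},
    {"clouds", "rain", "wind"},
    {"lightning", "fire", "rain", "clouds"},
    {"clouds", "wind", "car_accident"},
    {"rain", "lightning", "clouds", "fire"},
    {"lightning", "fire", "car_accident"},
]

def support(itemset):
    # fraction of transactions containing the whole itemset
    return sum(itemset <= t for t in transactions) / len(transactions)

def apriori(min_support):
    items = sorted({i for t in transactions for i in t})
    frequent = []
    level = [frozenset([i]) for i in items if support(frozenset([i])) >= min_support]
    while level:
        frequent += level
        # join frequent k-itemsets into candidate (k+1)-itemsets, then prune
        candidates = {a | b for a in level for b in level if len(a | b) == len(a) + 1}
        level = [c for c in candidates if support(c) >= min_support]
    return frequent

def rules(min_support, min_confidence):
    out = []
    for itemset in apriori(min_support):
        for r in range(1, len(itemset)):
            for lhs in combinations(itemset, r):
                lhs = frozenset(lhs)
                if support(itemset) / support(lhs) >= min_confidence:
                    out.append((set(lhs), set(itemset - lhs)))
    return out
```

On this data, rules(0.6, 1.0) yields exactly two certain rules: {rain} ⇒ {clouds} and {lightning} ⇒ {fire}.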
In general, what is true regarding Fagin's algorithm?
Fagin's theorem is the oldest result of descriptive complexity theory, a branch of computational complexity theory that characterizes complexity classes in terms of logic-based descriptions of their problems rather than by the behavior of algorithms for solving those problems. The theorem states that the set of all properties expressible in existential second-order logic is precisely the complexity class NP. It was proven by Ronald Fagin in 1973 in his doctoral thesis, and appears in his 1974 paper. The arity required by the second-order formula was improved (in one direction) in Lynch (1981), and several results of Grandjean have provided tighter bounds on nondeterministic random-access machines.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In general, what is true regarding Fagin's algorithm?
The algorithm inputs are A 1 . . .
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following statements is correct in the context of  information extraction?
Information extraction is the part of a greater puzzle which deals with the problem of devising automatic methods for text management, beyond its transmission, storage and display. The discipline of information retrieval (IR) has developed automatic methods, typically of a statistical flavor, for indexing large document collections and classifying documents. Another complementary approach is that of natural language processing (NLP) which has solved the problem of modelling human language processing with considerable success when taking into account the magnitude of the task.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following statements is correct in the context of  information extraction?
Traditional information extraction is a technology of natural language processing, which extracts information from typically natural language texts and structures it in a suitable manner. The kinds of information to be identified must be specified in a model before beginning the process, which is why the whole process of traditional information extraction is domain dependent. IE is split into the following five subtasks: named entity recognition (NER), coreference resolution (CO), template element construction (TE), template relation construction (TR), and template scenario production (ST). The task of named entity recognition is to recognize and to categorize all named entities contained in a text (assignment of a named entity to a predefined category).
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Implement MAP score
ScoreCloud (Notation research)
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Implement MAP score
The web-based map collection includes:
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
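The passages above do not actually define MAP, so for the sake of the question here is a standard sketch of mean average precision: per query, average precision (AP) averages the precision at the rank of each relevant retrieved document; MAP is the mean of AP over queries. Names are illustrative:

```python
def average_precision(retrieved, relevant):
    """AP for one query: mean of precision@k over the ranks k at which a
    relevant document is retrieved, divided by the number of relevant docs."""
    relevant = set(relevant)
    hits, total = 0, 0.0
    for k, doc in enumerate(retrieved, start=1):
        if doc in relevant:
            hits += 1
            total += hits / k
    return total / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """MAP over a list of (retrieved_ranking, relevant_set) pairs, one per query."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)
```

For example, retrieving ["d1", "d2", "d3"] when {"d1", "d3"} are relevant gives AP = (1/1 + 2/3) / 2 = 5/6.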
Which of the following statements on Latent Semantic Indexing (LSI) and Word Embeddings (WE) is correct?
Latent semantic indexing (LSI) is an indexing and retrieval method that uses a mathematical technique called singular value decomposition (SVD) to identify patterns in the relationships between the terms and concepts contained in an unstructured collection of text. LSI is based on the principle that words that are used in the same contexts tend to have similar meanings. A key feature of LSI is its ability to extract the conceptual content of a body of text by establishing associations between those terms that occur in similar contexts.LSI is also an application of correspondence analysis, a multivariate statistical technique developed by Jean-Paul Benzécri in the early 1970s, to a contingency table built from word counts in documents.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following statements on Latent Semantic Indexing (LSI) and Word Embeddings (WE) is correct?
Latent semantic analysis (LSA, performing singular-value decomposition on the document-term matrix) can improve search results by disambiguating polysemous words and searching for synonyms of the query. However, searching in the high-dimensional continuous space is much slower than searching the standard trie data structure of search engines.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In vector space retrieval each row of the matrix M corresponds to
Matrix equivalence; matrix congruence; matrix similarity; matrix consimilarity; row equivalence
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In vector space retrieval each row of the matrix M corresponds to
Let A be an m × n matrix with entries in the real numbers whose row rank is r. Therefore, the dimension of the row space of A is r. Let x1, x2, …, xr be a basis of the row space of A. We claim that the vectors Ax1, Ax2, …, Axr are linearly independent. To see why, consider a linear homogeneous relation involving these vectors with scalar coefficients c1, c2, …, cr: 0 = c1Ax1 + c2Ax2 + ⋯ + crAxr = A(c1x1 + c2x2 + ⋯ + crxr) = Av, where v = c1x1 + c2x2 + ⋯ + crxr. We make two observations: (a) v is a linear combination of vectors in the row space of A, which implies that v belongs to the row space of A, and (b) since Av = 0, the vector v is orthogonal to every row vector of A and, hence, is orthogonal to every vector in the row space of A. The facts (a) and (b) together imply that v is orthogonal to itself, which proves that v = 0 or, by the definition of v, c1x1 + c2x2 + ⋯ + crxr = 0. But recall that the xi were chosen as a basis of the row space of A and so are linearly independent, which implies that c1 = c2 = ⋯ = cr = 0.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following is correct regarding prediction models?
Nearly any statistical model can be used for prediction purposes. Broadly speaking, there are two classes of predictive models: parametric and non-parametric. A third class, semi-parametric models, includes features of both. Parametric models make "specific assumptions with regard to one or more of the population parameters that characterize the underlying distribution(s)". Non-parametric models "typically involve fewer assumptions of structure and distributional form but usually contain strong assumptions about independencies".
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following is correct regarding prediction models?
Without such means of combining predictions, errors tend to multiply. For example, imagine a large predictive model that is broken down into a series of submodels where the prediction of a given submodel is used as the input of another submodel, and that prediction is in turn used as the input into a third submodel, etc. If each submodel has 90% accuracy in its predictions, and there are five submodels in series, then the overall model has only 0.9^5 ≈ 59% accuracy.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
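The roughly 59% figure for five chained submodels can be checked in one line, assuming the submodels err independently:

```python
# Five submodels in series, each 90% accurate, errors assumed independent:
overall = 0.9 ** 5
print(round(overall, 4))  # 0.5905, i.e. roughly 59% overall accuracy
```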
Applying SVD to a term-document matrix M. Each concept is represented in K
A rank-reduced, singular value decomposition is performed on the matrix to determine patterns in the relationships between the terms and concepts contained in the text. The SVD forms the foundation for LSI. It computes the term and document vector spaces by approximating the single term-frequency matrix A into three other matrices: an m by r term-concept vector matrix T, an r by r singular values matrix S, and an n by r concept-document vector matrix D, which satisfy the relations A ≈ T S D^T, T^T T = I_r, D^T D = I_r, and S_{1,1} ≥ S_{2,2} ≥ … ≥ S_{r,r} > 0 with S_{i,j} = 0 where i ≠ j. In the formula, A is the supplied m by n weighted matrix of term frequencies in a collection of text where m is the number of unique terms, and n is the number of documents. T is a computed m by r matrix of term vectors where r is the rank of A, a measure of its unique dimensions ≤ min(m, n).
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Applying SVD to a term-document matrix M. Each concept is represented in K
The computed Tk and Dk matrices define the term and document vector spaces, which with the computed singular values, Sk, embody the conceptual information derived from the document collection. The similarity of terms or documents within these spaces is a factor of how close they are to each other in these spaces, typically computed as a function of the angle between the corresponding vectors. The same steps are used to locate the vectors representing the text of queries and new documents within the document space of an existing LSI index. By a simple transformation of the A = T S D^T equation into the equivalent D = A^T T S^{-1} equation, a new vector, d, for a query or for a new document can be created by computing a new column in A and then multiplying the new column by T S^{-1}.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
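A small numpy sketch of the truncated SVD and the fold-in transformation d = a^T T_k S_k^{-1} described above; the term-document matrix and the new document are toy examples:

```python
import numpy as np

# Toy term-document matrix A (terms x documents); entries are term counts.
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 2.0, 1.0]])

# Full SVD, then keep only the top-k singular values and vectors.
T, s, Dt = np.linalg.svd(A, full_matrices=False)
k = 2
Tk, Sk, Dk = T[:, :k], np.diag(s[:k]), Dt[:k, :].T

# Fold a new document (or query) into the existing concept space:
# d = a^T T_k S_k^{-1}, obtained by rearranging A = T S D^T.
a_new = np.array([1.0, 0.0, 1.0, 0.0])  # term counts of the new text
d_new = a_new @ Tk @ np.linalg.inv(Sk)
```

d_new is the new document's k-dimensional concept vector; similarities to existing documents can then be computed as angles against the rows of Dk.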
An HMM model would not be an appropriate approach to identify
The disadvantages of such models are: (1) The types of prior distributions that can be placed on hidden states are severely limited; (2) It is not possible to predict the probability of seeing an arbitrary observation. This second limitation is often not an issue in practice, since many common usages of HMM's do not require such predictive probabilities. A variant of the previously described discriminative model is the linear-chain conditional random field.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
An HMM model would not be an appropriate approach to identify
Several inference problems are associated with hidden Markov models, as outlined below.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following is NOT an (instance-level) ontology?
Ontologies vary on whether classes can contain other classes, whether a class can belong to itself, whether there is a universal class (that is, a class containing everything), etc. Sometimes restrictions along these lines are made in order to avoid certain well-known paradoxes.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following is NOT an (instance-level) ontology?
"Constructing an Ontology". Topics on General and Formal Ontology. 15–26.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
What is your take on the accuracy obtained in an unbalanced dataset? Do you think accuracy is the correct evaluation metric for this task? If yes, justify! If not, why not, and what else can be used?
Accuracy can be a misleading metric for imbalanced data sets. Consider a sample with 95 negative and 5 positive values. Classifying all values as negative in this case gives 0.95 accuracy score. There are many metrics that don't suffer from this problem.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
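The 95/5 example above can be worked out directly; balanced accuracy (the mean of per-class recall) is shown as one metric that does not suffer from the problem:

```python
y_true = [0] * 95 + [1] * 5   # 95 negatives, 5 positives
y_pred = [0] * 100            # classifier that always predicts "negative"

# Plain accuracy looks excellent despite the classifier finding no positives.
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(accuracy)  # 0.95

# Balanced accuracy averages recall over the two classes and exposes the failure.
recall_pos = sum(p == 1 for t, p in zip(y_true, y_pred) if t == 1) / 5
recall_neg = sum(p == 0 for t, p in zip(y_true, y_pred) if t == 0) / 95
balanced_accuracy = (recall_pos + recall_neg) / 2
print(balanced_accuracy)  # 0.5, no better than chance
```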
What is your take on the accuracy obtained in an unbalanced dataset? Do you think accuracy is the correct evaluation metric for this task? If yes, justify! If not, why not, and what else can be used?
Accuracy is also used as a statistical measure of how well a binary classification test correctly identifies or excludes a condition. That is, the accuracy is the proportion of correct predictions (both true positives and true negatives) among the total number of cases examined. As such, it compares estimates of pre- and post-test probability. To make the context clear by the semantics, it is often referred to as the "Rand accuracy" or "Rand index".
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
When using linear regression, which techniques improve your result? (One or multiple answers)
For more than one parameter the method extends in a direct manner. After checking that the model has been improved this process can be repeated until convergence. This approach has the advantages that it does not need the parameters q to be able to be determined from an individual data set and the linear regression is on the original error terms
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
When using linear regression, which techniques improve your result? (One or multiple answers)
(multi-)linear regression. Important steps during the development of a new method are: Evaluation of the quality of available experimental data, elimination of wrong data, finding of outliers. Construction of groups.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
For students born in April, how many months older are they than the average student in their grade? 5.4898 months For students born in March, how many months younger are they than the average student in their grade? 5.5102 months Discuss: Considering your common sense and the results obtained from the simulation: what advantage do students born in April have over those born in March? How may this affect their odds of becoming professional athletes?
The probability that at least two of n people share a birthday is 1 − 365!/((365 − n)! · 365^n). If the teacher had picked a specific day (say, 16 September), then the chance that at least one student was born on that specific day is 1 − (364/365)^30, about 7.9%.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
For students born in April, how many months older are they than the average student in their grade? 5.4898 months For students born in March, how many months younger are they than the average student in their grade? 5.5102 months Discuss: Considering your common sense and the results obtained from the simulation: what advantage do students born in April have over those born in March? How may this affect their odds of becoming professional athletes?
Children born between March and August would start school at the age of five years and those born between September and February start school at age four-and-a-half. Pupils remain at primary school for seven years completing Primary One to Seven. Then aged eleven or twelve, pupils start secondary school for a compulsory period of four years, with a final two years thereafter being optional.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The data contains information about submissions to a prestigious machine learning conference called ICLR. Columns: year, paper, authors, ratings, decisions, institution, csranking, categories, authors_citations, authors_publications, authors_hindex, arxiv. The data is stored in a pandas.DataFrame format. Create another field entitled reputation capturing how famous the last author of the paper is. Notice that the last author of the paper is usually the most senior person involved in the project. This field should equal log10(#citations/#publications + 1). Notice that each author in the dataset has at least 1 publication, so you don't risk dividing by 0.
The International Conference on Learning Representations (ICLR) is a machine learning conference typically held in late April or early May each year. The conference includes invited talks as well as oral and poster presentations of refereed papers. Since its inception in 2013, ICLR has employed an open peer review process to referee paper submissions (based on models proposed by Yann LeCun). In 2019, there were 1591 paper submissions, of which 500 were accepted with poster presentations (31%) and 24 with oral presentations (1.5%). In 2021, there were 2997 paper submissions, of which 860 were accepted (29%).
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The data contains information about submissions to a prestigious machine learning conference called ICLR. Columns: year, paper, authors, ratings, decisions, institution, csranking, categories, authors_citations, authors_publications, authors_hindex, arxiv. The data is stored in a pandas.DataFrame format. Create another field entitled reputation capturing how famous the last author of the paper is. Notice that the last author of the paper is usually the most senior person involved in the project. This field should equal log10(#citations/#publications + 1). Notice that each author in the dataset has at least 1 publication, so you don't risk dividing by 0.
To capture such variation, some experiments use sequences or patterns over observations rather than average observed frequencies, noting e.g. that an author shows a preference for a certain stress or emphasis pattern, or that an author tends to follow a sequence of long sentences with a short one. One of the very first approaches to authorship identification, by Mendenhall, can be said to aggregate its observations without averaging them. More recent authorship attribution models use vector space models to automatically capture what is specific to an author's style, but they also rely on judicious feature engineering for the same reasons as more traditional models.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
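A sketch of the requested reputation field on a toy frame. It assumes (an assumption about the real frame's layout) that authors_citations and authors_publications hold per-author lists whose last entry belongs to the last author:

```python
import numpy as np
import pandas as pd

# Toy frame standing in for the ICLR data; in the assumed layout, each row
# stores per-author lists and the last entry belongs to the last author.
df = pd.DataFrame({
    "paper": ["A", "B"],
    "authors_citations": [[120, 4500], [10, 300]],
    "authors_publications": [[8, 90], [2, 30]],
})

# reputation = log10(#citations / #publications + 1) for the last author.
df["reputation"] = np.log10(
    df["authors_citations"].str[-1] / df["authors_publications"].str[-1] + 1
)
```

The `.str[-1]` accessor picks the last element of each list-valued cell; since every author has at least one publication, the division is safe.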
What is our final goal in machine learning? (One answer)
Proc. 10th Conf. on Machine Learning.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
What is our final goal in machine learning? (One answer)
In machine learning, a common task is the study and construction of algorithms that can learn from and make predictions on data. Such algorithms function by making data-driven predictions or decisions, through building a mathematical model from input data. These input data used to build the model are usually divided into multiple data sets.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
For binary classification, which of the following methods can achieve perfect training accuracy on \textbf{all} linearly separable datasets?
There are two broad classes of methods for determining the parameters of a linear classifier w: generative and discriminative models. Methods of the former model the joint probability distribution, whereas methods of the latter model the conditional density functions P(class | x). Examples of the first class include linear discriminant analysis (LDA), which assumes Gaussian conditional density models, and the naive Bayes classifier with multinomial or multivariate Bernoulli event models. The second set of methods includes discriminative models, which attempt to maximize the quality of the output on a training set.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
For binary classification, which of the following methods can achieve perfect training accuracy on \textbf{all} linearly separable datasets?
Several approaches to learning B {\displaystyle \mathbf {B} } from data have been proposed. These include: performing a preliminary inference step to estimate B {\displaystyle \mathbf {B} } from the training data, a proposal to learn B {\displaystyle \mathbf {B} } and f {\displaystyle \mathbf {f} } together based on the cluster regularizer, and sparsity-based approaches which assume only a few of the features are needed.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
how can the results from a classifier impact the metric (precision) used? What could be a better suited metric to use with imbalanced data?
Accuracy can be a misleading metric for imbalanced data sets. Consider a sample with 95 negative and 5 positive values. Classifying all values as negative in this case gives 0.95 accuracy score. There are many metrics that don't suffer from this problem.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
how can the results from a classifier impact the metric (precision) used? What could be a better suited metric to use with imbalanced data?
The measures precision and recall are popular metrics used to evaluate the quality of a classification system. More recently, receiver operating characteristic (ROC) curves have been used to evaluate the tradeoff between true- and false-positive rates of classification algorithms. As a performance metric, the uncertainty coefficient has the advantage over simple accuracy in that it is not affected by the relative sizes of the different classes. Further, it will not penalize an algorithm for simply rearranging the classes.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
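A dependency-free sketch of the precision and recall metrics mentioned above; for imbalanced data these (or their harmonic mean, the F1 score) are commonly preferred over plain accuracy:

```python
def precision_recall(y_true, y_pred, positive=1):
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Unlike accuracy, precision and recall are computed only over the positive class, so a classifier that always predicts "negative" scores zero on both.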
A model predicts $\mathbf{\hat{y}} = [1, 0, 1, 1, 1]$. The ground truths are $\mathbf{y} = [1, 0, 0, 1, 1]$. What is the accuracy?
The above may be generalized to the cases where the weights are not identical and/or the errors are correlated. Suppose that the covariance matrix of the errors is Σ. Then since β̂_GLS = (X^T Σ^{−1} X)^{−1} X^T Σ^{−1} y, the hat matrix is thus H = X (X^T Σ^{−1} X)^{−1} X^T Σ^{−1}, and again it may be seen that H² = H·H = H, though now it is no longer symmetric.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
A model predicts $\mathbf{\hat{y}} = [1, 0, 1, 1, 1]$. The ground truths are $\mathbf{y} = [1, 0, 0, 1, 1]$. What is the accuracy?
μ^g_{R_0}(x − x̂_j) = 1/(2π R_0²)^{3/2} · exp(−(x − x̂_j)²/(2R_0²)). Choosing one or another distribution μ_{R_0}(x − x̂_j) does not affect significantly the predictions of the model, as long as the same value for R_0 is considered.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
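For the concrete vectors in the question above, accuracy is simply the fraction of positions where prediction and ground truth agree; a one-function sketch:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions matching the ground truth."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

print(accuracy([1, 0, 0, 1, 1], [1, 0, 1, 1, 1]))  # 0.8, i.e. 4 of 5 correct
```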
K-Means:
k-means clustering is a method of vector quantization, originally from signal processing, that aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean (cluster centers or cluster centroid), serving as a prototype of the cluster. This results in a partitioning of the data space into Voronoi cells. k-means clustering minimizes within-cluster variances (squared Euclidean distances), but not regular Euclidean distances, which would be the more difficult Weber problem: the mean optimizes squared errors, whereas only the geometric median minimizes Euclidean distances.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
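The alternating procedure described above (assign each point to its nearest centroid, then recompute each centroid as its cluster mean, usually called Lloyd's algorithm) can be sketched on small tuples of floats; random initialization is seeded for reproducibility:

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Lloyd's algorithm: alternate nearest-centroid assignment and
    centroid recomputation until the centroids stop moving."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign p to the centroid with the smallest squared distance
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # recompute each centroid as its cluster mean (keep it if cluster empty)
        new = [tuple(sum(vals) / len(vals) for vals in zip(*cl)) if cl else centroids[j]
               for j, cl in enumerate(clusters)]
        if new == centroids:
            break
        centroids = new
    return centroids
```

This minimizes the within-cluster sum of squared Euclidean distances, matching the objective described above.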
K-Means:
Machine Learning: A Guide to Current Research. Kluwer Academic Publishers. 1990.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The k-means algorithm for clustering is guaranteed to converge to a local optimum.
The classical k-means algorithm and its variations are known to only converge to local minima of the minimum-sum-of-squares clustering problem, i.e. minimizing ∑_{i=1}^{k} ∑_{x∈C_i} ‖x − μ_i‖², where μ_i is the mean of the points in cluster C_i. Many studies have attempted to improve the convergence behavior of the algorithm and maximize the chances of attaining the global optimum (or at least, local minima of better quality). Initialization and restart techniques discussed in the previous sections are one alternative to find better solutions. More recently, global optimization algorithms based on branch-and-bound and semidefinite programming have produced "provenly optimal" solutions for datasets with up to 4,177 entities and 20,531 features.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The k-means algorithm for clustering is guaranteed to converge to a local optimum.
As expected, due to the NP-hardness of the subjacent optimization problem, the computational time of optimal algorithms for K-means quickly increases beyond this size. Optimal solutions for small- and medium-scale instances still remain valuable as a benchmark tool, to evaluate the quality of other heuristics. To find high-quality local minima within a controlled computational time but without optimality guarantees, other works have explored metaheuristics and other global optimization techniques, e.g., based on incremental approaches and convex optimization, random swaps (i.e., iterated local search), variable neighborhood search and genetic algorithms. It is indeed known that finding better local minima of the minimum sum-of-squares clustering problem can make the difference between failure and success to recover cluster structures in feature spaces of high dimension.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
What is the algorithm to perform optimization with gradient descent? Actions between Start loop and End loop are performed multiple times. (One answer)
The gradient descent can be combined with a line search, finding the locally optimal step size γ {\displaystyle \gamma } on every iteration. Performing the line search can be time-consuming. Conversely, using a fixed small γ {\displaystyle \gamma } can yield poor convergence.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
What is the algorithm to perform optimization with gradient descent? Actions between Start loop and End loop are performed multiple times. (One answer)
In mathematics, gradient descent (also often called steepest descent) is a first-order iterative optimization algorithm for finding a local minimum of a differentiable function. The idea is to take repeated steps in the opposite direction of the gradient (or approximate gradient) of the function at the current point, because this is the direction of steepest descent. Conversely, stepping in the direction of the gradient will lead to a local maximum of that function; the procedure is then known as gradient ascent. It is particularly useful in machine learning for minimizing the cost or loss function.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
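The loop described above (repeatedly step against the gradient) can be sketched with a fixed step size on a one-dimensional quadratic:

```python
def gradient_descent(grad, x0, lr=0.1, steps=1000):
    """Repeatedly step in the direction opposite the gradient.
    A fixed step size lr is used for simplicity; a line search could
    instead pick the locally optimal step on every iteration."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3); the minimum is x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

Stepping with +lr instead of -lr would perform gradient ascent toward a local maximum, as the passage notes.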
In terms of the \textbf{bias-variance} decomposition, a 1-nearest neighbor classifier has \rule{2cm}{0.15mm} than a 3-nearest neighbor classifier.
In the case of k-nearest neighbors regression, when the expectation is taken over the possible labeling of a fixed training set, a closed-form expression exists that relates the bias–variance decomposition to the parameter k: E[(y − f̂(x))²] = (f(x) − (1/k) ∑_{i=1}^{k} f(N_i(x)))² + σ²/k + σ², where N_1(x), …, N_k(x) are the k nearest neighbors of x in the training set. The bias (first term) is a monotone rising function of k, while the variance (second term) drops off as k is increased. In fact, under "reasonable assumptions" the bias of the first-nearest neighbor (1-NN) estimator vanishes entirely as the size of the training set approaches infinity.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The training loss of the 1-nearest neighbor classifier is always zero.
There are many results on the error rate of the k nearest neighbour classifiers. The k-nearest neighbour classifier is strongly consistent (that is, for any joint distribution on ( X , Y ) {\displaystyle (X,Y)} ) provided k := k n {\displaystyle k:=k_{n}} diverges and k n / n {\displaystyle k_{n}/n} converges to zero as n → ∞ {\displaystyle n\to \infty } . Let C n k n n {\displaystyle C_{n}^{knn}} denote the k nearest neighbour classifier based on a training set of size n. Under certain regularity conditions, the excess risk admits the asymptotic expansion R ( C n k n n ) − R ( C B a y e s ) = ( B 1 1 k + B 2 ( k / n ) 4 / d ) { 1 + o ( 1 ) } {\displaystyle {\mathcal {R}}(C_{n}^{knn})-{\mathcal {R}}(C^{Bayes})=\left(B_{1}{\tfrac {1}{k}}+B_{2}\left({\tfrac {k}{n}}\right)^{4/d}\right)\{1+o(1)\}} for some constants B 1 {\displaystyle B_{1}} and B 2 {\displaystyle B_{2}} . The choice k ∗ = ⌊ B n 4 d + 4 ⌋ {\displaystyle k^{*}=\lfloor Bn^{\frac {4}{d+4}}\rfloor } offers a trade-off between the two terms in the above display, for which the k ∗ {\displaystyle k^{*}} -nearest neighbour error converges to the Bayes error at the optimal (minimax) rate O ( n − 4 d + 4 ) {\displaystyle {\mathcal {O}}(n^{-{\frac {4}{d+4}}})} .
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The training loss of the 1-nearest neighbor classifier is always zero.
The simplest and most intuitive loss function for categorization is the misclassification loss, or 0–1 loss, which is 0 if f ( x i ) = y i {\displaystyle f(x_{i})=y_{i}} and 1 if f ( x i ) ≠ y i {\displaystyle f(x_{i})\neq y_{i}} , i.e. the Heaviside step function on − y i f ( x i ) {\displaystyle -y_{i}f(x_{i})} . However, this loss function is not convex, which makes the regularization problem very difficult to minimize computationally. Therefore, we look for convex substitutes for the 0–1 loss.
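A direct implementation of the misclassification loss (the labels below are illustrative):

```python
import numpy as np

def zero_one_loss(y_true, y_pred):
    """0-1 loss averaged over a sample: 0 where f(x_i) == y_i,
    1 where f(x_i) != y_i."""
    return float(np.mean(np.asarray(y_true) != np.asarray(y_pred)))

# Two of four predictions are wrong, so the average loss is 0.5:
loss = zero_one_loss([1, -1, 1, 1], [1, 1, 1, -1])
```

Its non-convexity is visible here: the loss is a step function of the prediction, which is why convex surrogates such as the hinge or log loss are minimized instead.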
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The test loss of the 1-nearest neighbor classifier is always zero.
For example, in the case of classification, the simple zero-one loss function is often sufficient. This corresponds simply to assigning a loss of 1 to any incorrect labeling and implies that the optimal classifier minimizes the error rate on independent test data (i.e. counting up the fraction of instances that the learned function h: X → Y {\displaystyle h:{\mathcal {X}}\rightarrow {\mathcal {Y}}} labels wrongly, which is equivalent to maximizing the number of correctly classified instances). The goal of the learning procedure is then to minimize the error rate (maximize the correctness) on a "typical" test set.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The test loss of the 1-nearest neighbor classifier is always zero.
There are many results on the error rate of the k nearest neighbour classifiers. The k-nearest neighbour classifier is strongly consistent (that is, for any joint distribution on ( X , Y ) {\displaystyle (X,Y)} ) provided k := k n {\displaystyle k:=k_{n}} diverges and k n / n {\displaystyle k_{n}/n} converges to zero as n → ∞ {\displaystyle n\to \infty } . Let C n k n n {\displaystyle C_{n}^{knn}} denote the k nearest neighbour classifier based on a training set of size n. Under certain regularity conditions, the excess risk admits the asymptotic expansion R ( C n k n n ) − R ( C B a y e s ) = ( B 1 1 k + B 2 ( k / n ) 4 / d ) { 1 + o ( 1 ) } {\displaystyle {\mathcal {R}}(C_{n}^{knn})-{\mathcal {R}}(C^{Bayes})=\left(B_{1}{\tfrac {1}{k}}+B_{2}\left({\tfrac {k}{n}}\right)^{4/d}\right)\{1+o(1)\}} for some constants B 1 {\displaystyle B_{1}} and B 2 {\displaystyle B_{2}} . The choice k ∗ = ⌊ B n 4 d + 4 ⌋ {\displaystyle k^{*}=\lfloor Bn^{\frac {4}{d+4}}\rfloor } offers a trade-off between the two terms in the above display, for which the k ∗ {\displaystyle k^{*}} -nearest neighbour error converges to the Bayes error at the optimal (minimax) rate O ( n − 4 d + 4 ) {\displaystyle {\mathcal {O}}(n^{-{\frac {4}{d+4}}})} .
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The test loss of logistic regression is always zero.
For a single observation with actual outcome y k {\displaystyle y_{k}} and predicted probability p k {\displaystyle p_{k}} , the log loss is {\displaystyle {\begin{cases}-\ln p_{k}&{\text{ if }}y_{k}=1,\\-\ln(1-p_{k})&{\text{ if }}y_{k}=0.\end{cases}}} The log loss can be interpreted as the "surprisal" of the actual outcome y k {\displaystyle y_{k}} relative to the prediction p k {\displaystyle p_{k}} , and is a measure of information content. Note that log loss is always greater than or equal to 0, equals 0 only in case of a perfect prediction (i.e., when p k = 1 {\displaystyle p_{k}=1} and y k = 1 {\displaystyle y_{k}=1} , or p k = 0 {\displaystyle p_{k}=0} and y k = 0 {\displaystyle y_{k}=0} ), and approaches infinity as the prediction gets worse (i.e., when y k = 1 {\displaystyle y_{k}=1} and p k → 0 {\displaystyle p_{k}\to 0} or y k = 0 {\displaystyle y_{k}=0} and p k → 1 {\displaystyle p_{k}\to 1} ), meaning the actual outcome is "more surprising". Since the value of the logistic function is always strictly between zero and one, the log loss is always greater than zero and less than infinity. Note that unlike in a linear regression, where the model can have zero loss at a point by passing through a data point (and zero loss overall if all points are on a line), in a logistic regression it is not possible to have zero loss at any points, since y k {\displaystyle y_{k}} is either 0 or 1, but 0 < p k < 1 {\displaystyle 0<p_{k}<1} .
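The case analysis above translates directly into code (the probabilities used are illustrative):

```python
import math

def log_loss(y, p):
    """Log loss of a single observation: -ln(p) if y == 1,
    -ln(1 - p) if y == 0, for a predicted probability 0 < p < 1."""
    return -math.log(p) if y == 1 else -math.log(1.0 - p)

# A confident correct prediction is cheap; a confident wrong one is
# costly, and since 0 < p < 1 the loss is always strictly positive.
cheap = log_loss(1, 0.9)    # small surprisal
costly = log_loss(1, 0.1)   # large surprisal
```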
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The test loss of logistic regression is always zero.
The last term has constant expected value because the noise is uncorrelated and has zero mean. We can therefore drop both terms from the optimization.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Estimate the 95% confidence intervals of the geometric mean and the arithmetic mean of pageviews using bootstrap resampling. The data is given in a pandas.DataFrame called df and the respective column is called "pageviews". You can use the scipy.stats python library.
The bootstrap can be used to construct confidence intervals for Pearson's correlation coefficient. In the "non-parametric" bootstrap, n pairs (xi, yi) are resampled "with replacement" from the observed set of n pairs, and the correlation coefficient r is calculated based on the resampled data. This process is repeated a large number of times, and the empirical distribution of the resampled r values is used to approximate the sampling distribution of the statistic. A 95% confidence interval for ρ can be defined as the interval spanning from the 2.5th to the 97.5th percentile of the resampled r values.
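The pair-resampling scheme can be sketched with NumPy; the synthetic data, true correlation, and resample count below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=200)                      # synthetic paired sample
y = 0.6 * x + rng.normal(scale=0.8, size=200)

n = len(x)
r_boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)          # resample pairs with replacement
    r_boot.append(np.corrcoef(x[idx], y[idx])[0, 1])

# 95% percentile interval: 2.5th to 97.5th percentile of resampled r values
lo, hi = np.percentile(r_boot, [2.5, 97.5])
```

Note that whole (x, y) pairs are resampled together; resampling x and y independently would destroy the correlation being estimated.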
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Estimate the 95% confidence intervals of the geometric mean and the arithmetic mean of pageviews using bootstrap resampling. The data is given in a pandas.DataFrame called df and the respective column is called "pageviews". You can use the scipy.stats python library.
The simplest bootstrap method involves taking the original data set of heights, and, using a computer, sampling from it to form a new sample (called a 'resample' or bootstrap sample) that is also of size N. The bootstrap sample is taken from the original by using sampling with replacement, so, assuming N is sufficiently large, for all practical purposes there is virtually zero probability that it will be identical to the original "real" sample. This process is repeated a large number of times (typically 1,000 or 10,000 times), and for each of these bootstrap samples, we compute its mean (each of these is called a "bootstrap estimate"). We can now create a histogram of bootstrap means. This histogram provides an estimate of the shape of the distribution of the sample mean, from which we can answer questions about how much the mean varies across samples. (The method here, described for the mean, can be applied to almost any other statistic or estimator.)
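For the pageviews question above, `scipy.stats.bootstrap` implements exactly this procedure. The `df` below is synthetic stand-in data, since the real DataFrame is not shown:

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for the real pageviews data
df = pd.DataFrame({"pageviews": rng.lognormal(mean=3.0, sigma=1.0, size=500)})
data = (df["pageviews"].to_numpy(),)   # bootstrap expects a sequence of samples

# 95% CI for the arithmetic mean
ci_mean = stats.bootstrap(data, np.mean, confidence_level=0.95,
                          random_state=rng).confidence_interval

# 95% CI for the geometric mean
ci_gmean = stats.bootstrap(data, stats.gmean, confidence_level=0.95,
                           random_state=rng).confidence_interval
```

Both `np.mean` and `stats.gmean` accept an `axis` argument, which is what lets `bootstrap` evaluate them on all resamples at once; for right-skewed data like pageviews the geometric-mean interval sits well below the arithmetic-mean one.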
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
You want to build a convolutional neural network to distinguish between types of cars in images. Your friend Alice, a biologist, has been working on a network to classify wildlife, which she calls WildNet. She spent several weeks training that network, and made it accessible to you. What can you do with it?
A convolutional neural network (CNN) is a regularized type of feed-forward neural network that learns feature engineering by itself via filter (or kernel) optimization. Vanishing and exploding gradients, seen during backpropagation in earlier neural networks, are prevented by using regularized weights over fewer connections. For example, for each neuron in the fully connected layer, 10,000 weights would be required for processing an image sized 100 × 100 pixels. However, applying cascaded convolution (or cross-correlation) kernels, only 25 neurons are required to process 5 × 5-sized tiles. Higher-layer features are extracted from wider context windows, compared to lower-layer features.
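The weight-sharing claim can be made concrete with a plain NumPy cross-correlation: one 5 × 5 kernel (25 weights) slides over every tile of a 100 × 100 image, whereas a single fully connected neuron over the same image would need 10,000 weights. The random inputs are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(size=(100, 100))   # 100 x 100 input image
kernel = rng.normal(size=(5, 5))      # one shared kernel: only 25 weights

# Valid cross-correlation: the same 25 weights are reused at every position
h, w = image.shape
kh, kw = kernel.shape
out = np.empty((h - kh + 1, w - kw + 1))
for i in range(out.shape[0]):
    for j in range(out.shape[1]):
        out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
```

The resulting 96 × 96 feature map is translation-equivariant: shifting the input shifts the response, because identical weights are applied everywhere.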
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
You want to build a convolutional neural network to distinguish between types of cars in images. Your friend Alice, a biologist, has been working on a network to classify wildlife, which she calls WildNet. She spent several weeks training that network, and made it accessible to you. What can you do with it?
Convolutional neural networks (CNN) are a class of deep neural network whose architecture is based on shared weights of convolution kernels or filters that slide along input features, providing translation-equivariant responses known as feature maps. CNNs take advantage of the hierarchical pattern in data and assemble patterns of increasing complexity using smaller and simpler patterns discovered via their filters. Therefore, they are lower on a scale of connectivity and complexity. Convolutional networks were inspired by biological processes in that the connectivity pattern between neurons resembles the organization of the animal visual cortex. Individual cortical neurons respond to stimuli only in a restricted region of the visual field known as the receptive field.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Whenever I want to use Z-Score standardization (also known as normalization), I should use the mean and standard deviation of the training set to normalize my training, validation, and test set.
This process of converting a raw score into a standard score is called standardizing or normalizing (however, "normalizing" can refer to many types of ratios; see Normalization for more). Standard scores are most commonly called z-scores; the two terms may be used interchangeably, as they are in this article. Other equivalent terms in use include z-value, z-statistic, normal score, standardized variable, and pull in high energy physics. Computing a z-score requires knowledge of the mean and standard deviation of the complete population to which a data point belongs; if one only has a sample of observations from the population, then the analogous computation using the sample mean and sample standard deviation yields the t-statistic.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Whenever I want to use Z-Score standardization (also known as normalization), I should use the mean and standard deviation of the training set to normalize my training, validation, and test set.
In machine learning, we can handle various types of data, e.g. audio signals and pixel values for image data, and this data can include multiple dimensions. Feature standardization makes the values of each feature in the data have zero-mean (when subtracting the mean in the numerator) and unit-variance. This method is widely used for normalization in many machine learning algorithms (e.g., support vector machines, logistic regression, and artificial neural networks). The general method of calculation is to determine the distribution mean and standard deviation for each feature.
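The rule in the question above, fitting the statistics on the training split only, looks like this in NumPy (the feature matrices are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
X_train = rng.normal(loc=5.0, scale=2.0, size=(100, 3))   # synthetic splits
X_val   = rng.normal(loc=5.0, scale=2.0, size=(20, 3))
X_test  = rng.normal(loc=5.0, scale=2.0, size=(20, 3))

# Statistics come from the training set only ...
mu = X_train.mean(axis=0)
sigma = X_train.std(axis=0)

# ... and are applied unchanged to all three splits, so no information
# from the validation or test data leaks into preprocessing.
X_train_z = (X_train - mu) / sigma
X_val_z   = (X_val - mu) / sigma
X_test_z  = (X_test - mu) / sigma
```

Only the training split ends up with exactly zero mean and unit variance; the validation and test splits come out approximately standardized, which is the intended behavior.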
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In deep learning, which of these are hyper-parameters?
Hyperparameters are various settings that are used to control the learning process. CNNs use more hyperparameters than a standard multilayer perceptron (MLP).
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In deep learning, which of these are hyper-parameters?
Usually, but not always, hyperparameters cannot be learned using well-known gradient-based methods (such as gradient descent or L-BFGS), which are commonly employed to learn parameters. These hyperparameters are those parameters describing a model representation that cannot be learned by common optimization methods but nonetheless affect the loss function. An example would be the tolerance hyperparameter for errors in support vector machines.
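Because such hyperparameters are not reachable by gradient-based training, they are typically chosen by search against held-out data. A minimal sketch tuning k of a nearest-neighbor regressor (synthetic data, illustrative grid):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=300)
y = np.sin(X) + rng.normal(scale=0.3, size=300)
X_tr, y_tr = X[:200], y[:200]      # training split
X_val, y_val = X[200:], y[200:]    # validation split selects the hyperparameter

def knn_predict(x0, k):
    idx = np.argsort(np.abs(X_tr - x0))[:k]
    return y_tr[idx].mean()

def val_mse(k):
    preds = np.array([knn_predict(x0, k) for x0 in X_val])
    return float(np.mean((preds - y_val) ** 2))

# Grid search over the hyperparameter k: no gradients involved
grid = [1, 3, 5, 9, 15]
best_k = min(grid, key=val_mse)
```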
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
What is the mean squared error of $f$ for a sample, where $\textbf{x}$ is an input, $y$ a target and $f(\textbf{x},W)$ the mapping function ? (One answer)
Suppose that we have a training set consisting of a set of points x 1 , … , x n {\displaystyle x_{1},\dots ,x_{n}} and real values y i {\displaystyle y_{i}} associated with each point x i {\displaystyle x_{i}} . We assume that there is a function f(x) such that y = f ( x ) + ε {\displaystyle y=f(x)+\varepsilon } , where the noise, ε {\displaystyle \varepsilon } , has zero mean and variance σ 2 {\displaystyle \sigma ^{2}} . We want to find a function f ^ ( x ; D ) {\displaystyle {\hat {f}}(x;D)} , that approximates the true function f ( x ) {\displaystyle f(x)} as well as possible, by means of some learning algorithm based on a training dataset (sample) D = { ( x 1 , y 1 ) … , ( x n , y n ) } {\displaystyle D=\{(x_{1},y_{1})\dots ,(x_{n},y_{n})\}} . We make "as well as possible" precise by measuring the mean squared error between y {\displaystyle y} and f ^ ( x ; D ) {\displaystyle {\hat {f}}(x;D)}: we want ( y − f ^ ( x ; D ) ) 2 {\displaystyle (y-{\hat {f}}(x;D))^{2}} to be minimal, both for x 1 , … , x n {\displaystyle x_{1},\dots ,x_{n}} and for points outside of our sample.
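The criterion above, averaged over a sample, as a function (the sample values in the comment are illustrative):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error between targets y and predictions f(x, W)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))

# e.g. mse([1, 2, 3], [1, 2.5, 2]) == (0 + 0.25 + 1.0) / 3
loss = mse([1.0, 2.0, 3.0], [1.0, 2.5, 2.0])
```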
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus