Columns: question (string, lengths 6–3.53k), text (string, lengths 17–2.05k), source (string, 1 class)
Let $\xv_1, \dots, \xv_N$ be a dataset of $N$ vectors in $\R^D$. Write down the covariance matrix of the dataset $\Xm = (\xv_1, \dots, \xv_N) \in \R^{D \times N}$, \emph{and} state its dimensions. Data is centered.
Here we deal with a set of 2D matrices $(X_1,\ldots,X_n)$. Suppose they are centered, $\sum_i X_i = 0$. We construct row–row and column–column covariance matrices $F=\sum_i X_i X_i^{\mathsf{T}}$ and $G=\sum_i X_i^{\mathsf{T}} X_i$ in exactly the same manner as in SVD, and compute their eigenvectors $U$ and $V$.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Let $\xv_1, \dots, \xv_N$ be a dataset of $N$ vectors in $\R^D$. Write down the covariance matrix of the dataset $\Xm = (\xv_1, \dots, \xv_N) \in \R^{D \times N}$, \emph{and} state its dimensions. Data is centered.
Let matrix $X$ contain the set of 1D vectors which have been centered. In PCA/SVD, we construct the covariance matrix $F$ and Gram matrix $G$, $F=XX^{\mathsf{T}}$, $G=X^{\mathsf{T}}X$, and compute their eigenvectors $U$ and $V$. Since $VV^{\mathsf{T}}=I$ and $UU^{\mathsf{T}}=I$ we have $X=UU^{\mathsf{T}}XVV^{\mathsf{T}}=U\left(U^{\mathsf{T}}XV\right)V^{\mathsf{T}}=U\Sigma V^{\mathsf{T}}$. If we retain only $K$ principal eigenvectors in $U, V$, this gives a low-rank approximation of $X$.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
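The covariance and SVD constructions in the rows above can be checked numerically; this is a minimal sketch (the random data and the sizes $D$, $N$, $K$ are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 4, 10
X = rng.normal(size=(D, N))            # columns are the data vectors x_1..x_N
X -= X.mean(axis=1, keepdims=True)     # center the data

# Covariance of the centered dataset: (1/N) X X^T, a D x D matrix
C = X @ X.T / N

# SVD-based low-rank approximation: keep only K principal directions
U, s, Vt = np.linalg.svd(X, full_matrices=False)
K = 2
X_K = U[:, :K] @ np.diag(s[:K]) @ Vt[:K, :]
```

The question's answer follows the same shape bookkeeping: with $\Xm \in \R^{D \times N}$ and centered data, the covariance $\frac{1}{N}\Xm\Xm^\top$ has dimensions $D \times D$.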
You are doing your ML project. It is a regression task under a square loss. Your neighbor uses linear regression and least squares. You are smarter. You are using a neural net with 10 layers and activation functions $f(x)=3x$. You have a powerful laptop but not a supercomputer. You are betting your neighbor a beer at Satellite on who will have a substantially better score. However, at the end it will essentially be a tie, so you decide to have two beers and both pay. What is the reason for the outcome of this bet?
Regression analysis also falls short in certain cases which are more difficult to model. For instance, in football, 3 or 7 points are typically scored at a time, so bets involving a final score frequently include combinations of these two numbers. However, a simple linear regression will not accurately model this.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
You are doing your ML project. It is a regression task under a square loss. Your neighbor uses linear regression and least squares. You are smarter. You are using a neural net with 10 layers and activation functions $f(x)=3x$. You have a powerful laptop but not a supercomputer. You are betting your neighbor a beer at Satellite on who will have a substantially better score. However, at the end it will essentially be a tie, so you decide to have two beers and both pay. What is the reason for the outcome of this bet?
A bit is the amount of entropy in a bettable event with two possible outcomes and even odds. Obviously we could double our money if we knew beforehand for certain what the outcome of that event would be. Kelly's insight was that no matter how complicated the betting scenario is, we can use an optimum betting strategy, called the Kelly criterion, to make our money grow exponentially with whatever side information we are able to obtain. The value of this "illicit" side information is measured as mutual information relative to the outcome of the bettable event: $$I(X;Y)=\mathbb{E}_Y\{D_{\mathrm{KL}}(P(X\mid Y)\,\|\,P(X\mid I))\}=\mathbb{E}_Y\{D_{\mathrm{KL}}(P(X\mid \text{side information } Y)\,\|\,P(X\mid \text{stated odds } I))\},$$ where $Y$ is the side information, $X$ is the outcome of the bettable event, and $I$ is the state of the bookmaker's knowledge.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
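The reason for the tie in the bet above is that $f(x)=3x$ is linear, so the 10-layer network composes into a single linear map, i.e. the same hypothesis class the neighbor fits with least squares. A minimal numeric sketch of that collapse (random weights and sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
x = rng.normal(size=d)
Ws = [rng.normal(size=(d, d)) for _ in range(10)]  # ten layer matrices

# Forward pass through ten layers with activation f(z) = 3z
h = x
for W in Ws:
    h = 3 * (W @ h)

# The same computation accumulated into one linear map W_eff
W_eff = np.eye(d)
for W in Ws:
    W_eff = 3 * W @ W_eff
```

Since the whole network is one matrix, gradient training cannot beat least squares, which already finds the optimal linear fit under square loss.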
Let $f:\R^D\rightarrow\R$ be an $L$-hidden layer multi-layer perceptron (MLP) such that \[ f(\xv)=\sigma_{L+1}\big(\wv^\top\sigma_L(\Wm_L\sigma_{L-1}(\Wm_{L-1}\dots\sigma_1(\Wm_1\xv)))\big), \] with $\wv\in\R^{M}$, $\Wm_1\in\R^{M\times D}$ and $\Wm_\ell\in\R^{M\times M}$ for $\ell=2,\dots, L$, and $\sigma_i$ for $i=1,\dots,L+1$ is an entry-wise activation function. For any MLP $f$ and a classification threshold $\tau$ let $C_{f,\tau}$ be a binary classifier that outputs YES for a given input $\xv$ if $f(\xv) \leq \tau$ and NO otherwise. Assume $\sigma_{L+1}$ is the element-wise \textbf{sigmoid} function and $C_{f,\frac{1}{2}}$ is able to obtain a high accuracy on a given binary classification task $T$. Let $g$ be the MLP obtained by multiplying the parameters \textbf{in the last layer} of $f$, i.e. $\wv$, by 2. Moreover, let $h$ be the MLP obtained by replacing $\sigma_{L+1}$ with element-wise \textbf{ReLU}. Finally, let $q$ be the MLP obtained by doing both of these actions. Which of the following is true? $\mathrm{ReLU}(x) = \max\{x, 0\}$, $\mathrm{Sigmoid}(x) = \frac{1}{1 + e^{-x}}$
Consider a multilayer perceptron (MLP) with one hidden layer and $m$ hidden units with mapping from input $x\in\mathbb{R}^d$ to a scalar output described as $F_x({\tilde W},\Theta)=\sum_{i=1}^m \theta_i\,\phi(x^{T}{\tilde w}^{(i)})$, where ${\tilde w}^{(i)}$ and $\theta_i$ are the input and output weights of unit $i$ correspondingly, and $\phi$ is the activation function, assumed to be a tanh function. The input and output weights could then be optimized with $\min_{{\tilde W},\Theta}\big(f_{NN}({\tilde W},\Theta)=\mathbb{E}_{y,x}\ldots\big)$, where $l$ is a loss function, ${\tilde W}=\{{\tilde w}^{(1)},\ldots$
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Let $f:\R^D\rightarrow\R$ be an $L$-hidden layer multi-layer perceptron (MLP) such that \[ f(\xv)=\sigma_{L+1}\big(\wv^\top\sigma_L(\Wm_L\sigma_{L-1}(\Wm_{L-1}\dots\sigma_1(\Wm_1\xv)))\big), \] with $\wv\in\R^{M}$, $\Wm_1\in\R^{M\times D}$ and $\Wm_\ell\in\R^{M\times M}$ for $\ell=2,\dots, L$, and $\sigma_i$ for $i=1,\dots,L+1$ is an entry-wise activation function. For any MLP $f$ and a classification threshold $\tau$ let $C_{f,\tau}$ be a binary classifier that outputs YES for a given input $\xv$ if $f(\xv) \leq \tau$ and NO otherwise. Assume $\sigma_{L+1}$ is the element-wise \textbf{sigmoid} function and $C_{f,\frac{1}{2}}$ is able to obtain a high accuracy on a given binary classification task $T$. Let $g$ be the MLP obtained by multiplying the parameters \textbf{in the last layer} of $f$, i.e. $\wv$, by 2. Moreover, let $h$ be the MLP obtained by replacing $\sigma_{L+1}$ with element-wise \textbf{ReLU}. Finally, let $q$ be the MLP obtained by doing both of these actions. Which of the following is true? $\mathrm{ReLU}(x) = \max\{x, 0\}$, $\mathrm{Sigmoid}(x) = \frac{1}{1 + e^{-x}}$
Consider a multilayer perceptron (MLP) with one hidden layer and $m$ hidden units with mapping from input $x\in\mathbb{R}^d$ to a scalar output described as $F_x({\tilde W},\Theta)=\sum_{i=1}^m \theta_i\,\phi(x^{T}{\tilde w}^{(i)})$, where ${\tilde w}^{(i)}$ and $\theta_i$ are the input and output weights of unit $i$ correspondingly, and $\phi$ is the activation function, assumed to be a tanh function. The input and output weights could then be optimized with $\min_{{\tilde W},\Theta}\big(f_{NN}({\tilde W},\Theta)=\mathbb{E}_{y,x}\ldots\big)$, where $l$ is a loss function, ${\tilde W}=\{{\tilde w}^{(1)},\ldots$
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
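For the classifier question above, note that $\mathrm{Sigmoid}(z) \leq \frac{1}{2}$ iff $z \leq 0$, and doubling $\wv$ does not change the sign of the pre-activation, so $C_{g,\frac{1}{2}}$ decides identically to $C_{f,\frac{1}{2}}$. A quick numeric check (random values stand in for the pre-activations $\wv^\top\sigma_L(\cdot)$, an assumption for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
z = rng.normal(size=1000)        # simulated last-layer pre-activations

pred_f = sigmoid(z) <= 0.5       # C_{f, 1/2}: YES iff sigmoid(z) <= 1/2
pred_g = sigmoid(2 * z) <= 0.5   # last-layer weights doubled: C_{g, 1/2}
```

Because `2 * z` has the same sign as `z`, the two prediction vectors agree everywhere.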
In this exercise, we will see how to combine the Principal Component Analysis (PCA) and the kernel method into an algorithm known as kernel PCA. We are given $n$ observations in a low dimensional space $\mathbf{x}_{1}, \cdots, \mathbf{x}_{n} \in \mathbb{R}^{L}$ and we consider a kernel $k$ and its associated feature map $\phi: \mathbb{R}^{L} \mapsto \mathbb{R}^{H}$ which satisfies: $$ k(\mathbf{x}, \mathbf{y})=\langle\phi(\mathbf{x}), \phi(\mathbf{y})\rangle_{\mathbb{R}^{H}} $$ where $\langle\cdot, \cdot\rangle_{\mathbb{R}^{H}}$ is the standard scalar product of $\mathbb{R}^{H}$. We define the empirical covariance matrix and the empirical covariance matrix of the mapped observations as: $$ \boldsymbol{\Sigma}:=\frac{1}{n} \sum_{i=1}^{n} \mathbf{x}_{i} \mathbf{x}_{i}^{\top} \quad \text { and } \quad \boldsymbol{\Sigma}^{\mathbf{H}}:=\frac{1}{n} \sum_{i=1}^{n} \phi\left(\mathbf{x}_{i}\right) \phi\left(\mathbf{x}_{i}\right)^{\top} $$ The kernel matrix $\mathbf{K}$ is defined by: $$ \mathbf{K}_{i, j}:=k\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right)=\left\langle\phi\left(\mathbf{x}_{i}\right), \phi\left(\mathbf{x}_{j}\right)\right\rangle_{\mathbb{R}^{H}} $$ We also define the data matrix and the corresponding matrix of the mapped data as: $$ \mathbf{X}:=\left(\begin{array}{c} \mathbf{x}_{1}^{\top} \\ \cdots \\ \mathbf{x}_{n}^{\top} \end{array}\right) \in \mathbb{R}^{n \times L} \quad \text { and } \quad \mathbf{\Phi}:=\left(\begin{array}{c} \phi\left(\mathbf{x}_{1}\right)^{\top} \\ \cdots \\ \phi\left(\mathbf{x}_{n}\right)^{\top} \end{array}\right) \in \mathbb{R}^{n \times H} . $$ Finally we denote the eigenpairs (eigenvalues and eigenvectors) of $\boldsymbol{\Sigma}^{\mathbf{H}}$ by $\left\{\left(\lambda_{i}, \mathbf{v}_{i}\right)\right\}_{i=1}^{H}$ and those of $\mathbf{K}$ by $\left\{\left(\rho_{j}, \mathbf{w}_{j}\right)\right\}_{j=1}^{n}$. We also assume that the vectors $\mathbf{v}_{i}$ and $\mathbf{w}_{j}$ are normalized.
Thus: $$ \boldsymbol{\Sigma}^{\mathbf{H}} \mathbf{v}_{i}=\lambda_{i} \mathbf{v}_{i}, \quad\left\|\mathbf{v}_{i}\right\|_{2}=1 \quad \text { and } \quad \mathbf{K} \mathbf{w}_{j}=\rho_{j} \mathbf{w}_{j}, \quad\left\|\mathbf{w}_{j}\right\|_{2}=1 $$ Recall that in the kernel setting we assume that we can compute $k(\mathbf{x}, \mathbf{y})$ but that we cannot directly compute $\phi(\mathbf{x})$. What we would like to do is to first map the data into the high-dimensional space using the features map $\phi$ and then to apply the standard PCA algorithm in the high-dimensional space $\mathbb{R}^{H}$. This would amount to: (a) Computing the empirical covariance matrix $\boldsymbol{\Sigma}^{\mathbf{H}}$ of the mapped data $\phi\left(\mathbf{x}_{i}\right)$. (b) Computing the eigenvectors $\mathbf{v}_{1}, \cdots, \mathbf{v}_{N}$ associated with the $N$ largest eigenvalues of $\boldsymbol{\Sigma}^{\mathbf{H}}$. (c) Computing the projection $\Pi\left(\phi\left(\mathbf{x}_{i}\right)\right) \in \mathbb{R}^{N}$ for each data point onto these eigenvectors, where the $j$-th component of the projection is given by: $$ \Pi_{j}\left(\phi\left(\mathbf{x}_{i}\right)\right)=\left\langle\phi\left(\mathbf{x}_{i}\right), \mathbf{v}_{j}\right\rangle_{\mathbb{R}^{H}} $$ Write the empirical covariance matrices $\boldsymbol{\Sigma}$ and $\boldsymbol{\Sigma}^{\mathbf{H}}$ as functions of the design matrix $\mathbf{X}$ and the features matrix $\boldsymbol{\Phi}$. What are the sizes of these matrices $\boldsymbol{\Sigma}$ and $\boldsymbol{\Sigma}^{\mathbf{H}}$?
Principal component analysis can be employed in a nonlinear way by means of the kernel trick. The resulting technique, called kernel PCA, is capable of constructing nonlinear mappings that maximize the variance in the data.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In this exercise, we will see how to combine the Principal Component Analysis (PCA) and the kernel method into an algorithm known as kernel PCA. We are given $n$ observations in a low dimensional space $\mathbf{x}_{1}, \cdots, \mathbf{x}_{n} \in \mathbb{R}^{L}$ and we consider a kernel $k$ and its associated feature map $\phi: \mathbb{R}^{L} \mapsto \mathbb{R}^{H}$ which satisfies: $$ k(\mathbf{x}, \mathbf{y})=\langle\phi(\mathbf{x}), \phi(\mathbf{y})\rangle_{\mathbb{R}^{H}} $$ where $\langle\cdot, \cdot\rangle_{\mathbb{R}^{H}}$ is the standard scalar product of $\mathbb{R}^{H}$. We define the empirical covariance matrix and the empirical covariance matrix of the mapped observations as: $$ \boldsymbol{\Sigma}:=\frac{1}{n} \sum_{i=1}^{n} \mathbf{x}_{i} \mathbf{x}_{i}^{\top} \quad \text { and } \quad \boldsymbol{\Sigma}^{\mathbf{H}}:=\frac{1}{n} \sum_{i=1}^{n} \phi\left(\mathbf{x}_{i}\right) \phi\left(\mathbf{x}_{i}\right)^{\top} $$ The kernel matrix $\mathbf{K}$ is defined by: $$ \mathbf{K}_{i, j}:=k\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right)=\left\langle\phi\left(\mathbf{x}_{i}\right), \phi\left(\mathbf{x}_{j}\right)\right\rangle_{\mathbb{R}^{H}} $$ We also define the data matrix and the corresponding matrix of the mapped data as: $$ \mathbf{X}:=\left(\begin{array}{c} \mathbf{x}_{1}^{\top} \\ \cdots \\ \mathbf{x}_{n}^{\top} \end{array}\right) \in \mathbb{R}^{n \times L} \quad \text { and } \quad \mathbf{\Phi}:=\left(\begin{array}{c} \phi\left(\mathbf{x}_{1}\right)^{\top} \\ \cdots \\ \phi\left(\mathbf{x}_{n}\right)^{\top} \end{array}\right) \in \mathbb{R}^{n \times H} . $$ Finally we denote the eigenpairs (eigenvalues and eigenvectors) of $\boldsymbol{\Sigma}^{\mathbf{H}}$ by $\left\{\left(\lambda_{i}, \mathbf{v}_{i}\right)\right\}_{i=1}^{H}$ and those of $\mathbf{K}$ by $\left\{\left(\rho_{j}, \mathbf{w}_{j}\right)\right\}_{j=1}^{n}$. We also assume that the vectors $\mathbf{v}_{i}$ and $\mathbf{w}_{j}$ are normalized.
Thus: $$ \boldsymbol{\Sigma}^{\mathbf{H}} \mathbf{v}_{i}=\lambda_{i} \mathbf{v}_{i}, \quad\left\|\mathbf{v}_{i}\right\|_{2}=1 \quad \text { and } \quad \mathbf{K} \mathbf{w}_{j}=\rho_{j} \mathbf{w}_{j}, \quad\left\|\mathbf{w}_{j}\right\|_{2}=1 $$ Recall that in the kernel setting we assume that we can compute $k(\mathbf{x}, \mathbf{y})$ but that we cannot directly compute $\phi(\mathbf{x})$. What we would like to do is to first map the data into the high-dimensional space using the features map $\phi$ and then to apply the standard PCA algorithm in the high-dimensional space $\mathbb{R}^{H}$. This would amount to: (a) Computing the empirical covariance matrix $\boldsymbol{\Sigma}^{\mathbf{H}}$ of the mapped data $\phi\left(\mathbf{x}_{i}\right)$. (b) Computing the eigenvectors $\mathbf{v}_{1}, \cdots, \mathbf{v}_{N}$ associated with the $N$ largest eigenvalues of $\boldsymbol{\Sigma}^{\mathbf{H}}$. (c) Computing the projection $\Pi\left(\phi\left(\mathbf{x}_{i}\right)\right) \in \mathbb{R}^{N}$ for each data point onto these eigenvectors, where the $j$-th component of the projection is given by: $$ \Pi_{j}\left(\phi\left(\mathbf{x}_{i}\right)\right)=\left\langle\phi\left(\mathbf{x}_{i}\right), \mathbf{v}_{j}\right\rangle_{\mathbb{R}^{H}} $$ Write the empirical covariance matrices $\boldsymbol{\Sigma}$ and $\boldsymbol{\Sigma}^{\mathbf{H}}$ as functions of the design matrix $\mathbf{X}$ and the features matrix $\boldsymbol{\Phi}$. What are the sizes of these matrices $\boldsymbol{\Sigma}$ and $\boldsymbol{\Sigma}^{\mathbf{H}}$?
In the field of multivariate statistics, kernel principal component analysis (kernel PCA) is an extension of principal component analysis (PCA) using techniques of kernel methods. Using a kernel, the originally linear operations of PCA are performed in a reproducing kernel Hilbert space.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
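The matrix forms asked for above are $\boldsymbol{\Sigma} = \frac{1}{n}\mathbf{X}^\top\mathbf{X}$ (size $L \times L$) and $\boldsymbol{\Sigma}^{\mathbf{H}} = \frac{1}{n}\boldsymbol{\Phi}^\top\boldsymbol{\Phi}$ (size $H \times H$). A sketch with an explicit toy feature map (the map itself and the sizes are assumptions, since in the true kernel setting $\phi$ is not computed):

```python
import numpy as np

rng = np.random.default_rng(3)
n, L, H = 8, 2, 5
X = rng.normal(size=(n, L))           # rows are x_i^T, as in the exercise

def phi(x):
    # Toy explicit feature map R^L -> R^H, for illustration only
    return np.array([x[0], x[1], x[0] ** 2, x[1] ** 2, x[0] * x[1]])

Phi = np.vstack([phi(x) for x in X])  # n x H features matrix

Sigma = X.T @ X / n                   # (1/n) X^T X,     size L x L
Sigma_H = Phi.T @ Phi / n             # (1/n) Phi^T Phi, size H x H
K = Phi @ Phi.T                       # kernel matrix, K_ij = <phi(x_i), phi(x_j)>
```

Note the contrast with $\mathbf{K} = \boldsymbol{\Phi}\boldsymbol{\Phi}^\top$, which is only $n \times n$; this is what makes the kernel trick tractable when $H$ is large.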
We will analyze the $K$-means algorithm and show that it always converges. Let us consider the $K$-means objective function: $$ \mathcal{L}(\mathbf{z}, \boldsymbol{\mu})=\sum_{n=1}^{N} \sum_{k=1}^{K} z_{n k}\left\|\mathbf{x}_{n}-\boldsymbol{\mu}_{k}\right\|_{2}^{2} $$ where $z_{n k} \in\{0,1\}$ with $\sum_{k=1}^{K} z_{n k}=1$ and $\boldsymbol{\mu}_{k} \in \mathbb{R}^{D}$ for $k=1, \ldots, K$ and $n=1, \ldots, N$. How would you choose $\left\{\boldsymbol{\mu}_{k}\right\}_{k=1}^{K}$ to minimize $\mathcal{L}(\mathbf{z}, \boldsymbol{\mu})$ for given $\left\{z_{n k}\right\}_{n, k=1}^{N, K}$? Compute the closed-form formula for the $\boldsymbol{\mu}_{k}$. To which step of the $K$-means algorithm does it correspond?
It can be shown that the algorithm will terminate in a finite number of iterations (no more than the total number of possible assignments, which is bounded by k m {\displaystyle k^{m}} ). In addition, the algorithm will terminate at a point that the overall objective cannot be decreased either by a different assignment or by defining new cluster planes for these clusters (such point is called "locally optimal" in the references). This convergence result is a consequence of the fact that problem (P2) can be solved exactly. The same convergence result holds for k-means algorithm because the cluster update problem can be solved exactly.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
We will analyze the $K$-means algorithm and show that it always converges. Let us consider the $K$-means objective function: $$ \mathcal{L}(\mathbf{z}, \boldsymbol{\mu})=\sum_{n=1}^{N} \sum_{k=1}^{K} z_{n k}\left\|\mathbf{x}_{n}-\boldsymbol{\mu}_{k}\right\|_{2}^{2} $$ where $z_{n k} \in\{0,1\}$ with $\sum_{k=1}^{K} z_{n k}=1$ and $\boldsymbol{\mu}_{k} \in \mathbb{R}^{D}$ for $k=1, \ldots, K$ and $n=1, \ldots, N$. How would you choose $\left\{\boldsymbol{\mu}_{k}\right\}_{k=1}^{K}$ to minimize $\mathcal{L}(\mathbf{z}, \boldsymbol{\mu})$ for given $\left\{z_{n k}\right\}_{n, k=1}^{N, K}$? Compute the closed-form formula for the $\boldsymbol{\mu}_{k}$. To which step of the $K$-means algorithm does it correspond?
Given a set of observations $(\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n)$, where each observation is a $d$-dimensional real vector, k-means clustering aims to partition the $n$ observations into $k$ ($\leq n$) sets $S = \{S_1, S_2, \ldots, S_k\}$ so as to minimize the within-cluster sum of squares (WCSS) (i.e. variance). Formally, the objective is to find: $$\arg\min_{S} \sum_{i=1}^{k} \sum_{\mathbf{x} \in S_i} \left\|\mathbf{x} - \boldsymbol{\mu}_i\right\|^2,$$ where $\boldsymbol{\mu}_i$ is the mean (also called centroid) of points in $S_i$, i.e. $\boldsymbol{\mu}_i = \frac{1}{|S_i|} \sum_{\mathbf{x} \in S_i} \mathbf{x}$, $|S_i|$ is the size of $S_i$, and $\|\cdot\|$ is the usual $L^2$ norm. This is equivalent to minimizing the pairwise squared deviations of points in the same cluster: $$\arg\min_{S} \sum_{i=1}^{k} \frac{1}{|S_i|} \sum_{\mathbf{x}, \mathbf{y} \in S_i} \left\|\mathbf{x} - \mathbf{y}\right\|^2.$$ The equivalence can be deduced from the identity $|S_i| \sum_{\mathbf{x} \in S_i} \left\|\mathbf{x} - \boldsymbol{\mu}_i\right\|^2 = \frac{1}{2} \sum_{\mathbf{x}, \mathbf{y} \in S_i} \left\|\mathbf{x} - \mathbf{y}\right\|^2$. Since the total variance is constant, this is equivalent to maximizing the sum of squared deviations between points in different clusters (between-cluster sum of squares, BCSS). This deterministic relationship is also related to the law of total variance in probability theory.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
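The closed-form minimizer asked for above is the cluster mean, $\boldsymbol{\mu}_k = \sum_n z_{nk}\mathbf{x}_n \big/ \sum_n z_{nk}$, i.e. the centroid-update step of $K$-means. A direct sketch (the data and the deterministic toy assignments are assumptions):

```python
import numpy as np

N, K, D = 12, 3, 2
rng = np.random.default_rng(4)
X = rng.normal(size=(N, D))

# One-hot assignments z_nk, chosen deterministically so no cluster is empty
labels = np.arange(N) % K
z = np.zeros((N, K))
z[np.arange(N), labels] = 1.0

# Update step: mu_k = sum_n z_nk x_n / sum_n z_nk  (mean of cluster k)
mu = (z.T @ X) / z.sum(axis=0)[:, None]
```

Each row `mu[k]` equals the plain mean of the points assigned to cluster `k`, which is what makes the update exact and the objective monotonically non-increasing.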
Assume we have $N$ training samples $(\xx_1, y_1), \dots, (\xx_N, y_N)$ where for each sample $i \in \{1, \dots, N\}$ we have that $\xx_i \in \R^d$ and $y_i \in \{-1, 1\}$. We want to classify the dataset using the exponential loss $L(\ww) = \frac{1}{N} \sum_{i=1}^N \exp(-y_i \xx_i^\top \ww)$ for $\ww \in \R^d$. Which of the following statements is \textbf{true}:
In both cases, it is assumed that the training set consists of a sample of independent and identically distributed pairs, $(x_i, y_i)$. In order to measure how well a function fits the training data, a loss function $L: Y \times Y \to \mathbb{R}^{\geq 0}$ is defined. For training example $(x_i, y_i)$, the loss of predicting the value $\hat{y}$ is $L(y_i, \hat{y})$. The risk $R(g)$ of function $g$ is defined as the expected loss of $g$. This can be estimated from the training data as $R_{emp}(g) = \frac{1}{N} \sum_i L(y_i, g(x_i))$.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Assume we have $N$ training samples $(\xx_1, y_1), \dots, (\xx_N, y_N)$ where for each sample $i \in \{1, \dots, N\}$ we have that $\xx_i \in \R^d$ and $y_i \in \{-1, 1\}$. We want to classify the dataset using the exponential loss $L(\ww) = \frac{1}{N} \sum_{i=1}^N \exp(-y_i \xx_i^\top \ww)$ for $\ww \in \R^d$. Which of the following statements is \textbf{true}:
Consider a learning setting given by a probabilistic space $(X \times Y, \rho(X, Y))$, $Y \in \mathbb{R}$. Let $S = \{x_i, y_i\}_{i=1}^n$ denote a training set of $n$ pairs i.i.d. with respect to $\rho$. Let $V: Y \times \mathbb{R} \rightarrow [0; \infty)$ be a loss function.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
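The exponential loss in the question can be evaluated and differentiated directly; its gradient is $\nabla L(\ww) = -\frac{1}{N}\sum_i e^{-y_i \xx_i^\top \ww}\, y_i \xx_i$. A sketch with synthetic data (the data, sizes, and weight vector are assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
N, d = 20, 3
X = rng.normal(size=(N, d))          # rows are x_i^T
y = rng.choice([-1, 1], size=N)      # labels in {-1, +1}
w = rng.normal(size=d)

margins = y * (X @ w)                # y_i x_i^T w
loss = np.exp(-margins).mean()       # L(w) = (1/N) sum_i exp(-y_i x_i^T w)
grad = -((np.exp(-margins) * y)[:, None] * X).mean(axis=0)
```

Note the loss is a mean of strictly positive terms, so it is bounded below by 0 but never attains it.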
Which of the following is correct regarding the Louvain algorithm?
The resulting communities displayed a sizable split in pelagic and benthic organisms. Two very common community detection algorithms for biological networks are the Louvain Method and the Leiden Algorithm. The Louvain method is a greedy algorithm that attempts to maximize modularity within a set of nodes; modularity favors heavy edges within communities and sparse edges between them.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following is correct regarding the Louvain algorithm?
The Louvain method for community detection is a method to extract non-overlapping communities from large networks created by Blondel et al. from the University of Louvain (the source of this method's name). The method is a greedy optimization method that appears to run in time $O(n \cdot \log n)$ where $n$ is the number of nodes in the network.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Let the first four retrieved documents be N N R R, where N denotes a non-relevant and R a relevant document. Then the MAP (Mean Average Precision) is:
Mean average precision (MAP) for a set of queries is the mean of the average precision scores for each query: $\operatorname{MAP} = \frac{\sum_{q=1}^{Q} \operatorname{AveP}(q)}{Q}$, where $Q$ is the number of queries.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Let the first four retrieved documents be N N R R, where N denotes a non-relevant and R a relevant document. Then the MAP (Mean Average Precision) is:
Mean average precision (MAP) for a set of queries is the mean of the average precision scores for each query: $\operatorname{MAP} = \frac{\sum_{q=1}^{Q} \operatorname{AveP}(q)}{Q}$, where $Q$ is the number of queries.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
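For the ranking N N R R above, average precision is the mean of P@k taken at the ranks of the relevant documents: $(\frac{1}{3} + \frac{2}{4})/2 = \frac{5}{12}$ (assuming both relevant documents appear in the retrieved list). A small helper makes the arithmetic explicit (the function name is ours):

```python
def average_precision(relevances):
    """Average precision: mean of P@k over the ranks k where rel = 1."""
    hits, precisions = 0, []
    for k, rel in enumerate(relevances, start=1):
        if rel:
            hits += 1
            precisions.append(hits / k)   # P@k at each relevant rank
    return sum(precisions) / len(precisions) if precisions else 0.0

ap = average_precision([0, 0, 1, 1])      # N N R R
```

With a single query, MAP coincides with this average precision.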
Implement Community Influencers by doing the following steps: - Isolate each community from the graph. - Select the node with the **maximum pagerank** within each community as the **influencer** of that community. - Break ties arbitrarily. - Hint: Useful functions: `nx.pagerank()`, `G.subgraph()`.
Sarma et al. describe two random walk-based distributed algorithms for computing PageRank of nodes in a network. One algorithm takes $O(\log n / \epsilon)$ rounds with high probability on any graph (directed or undirected), where $n$ is the network size and $\epsilon$ is the reset probability ($1 - \epsilon$ is called the damping factor) used in the PageRank computation. They also present a faster algorithm that takes $O(\sqrt{\log n} / \epsilon)$ rounds in undirected graphs. In both algorithms, each node processes and sends a number of bits per round that are polylogarithmic in $n$, the network size.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Implement Community Influencers by doing the following steps: - Isolate each community from the graph. - Select the node with the **maximum pagerank** within each community as the **influencer** of that community. - Break ties arbitrarily. - Hint: Useful functions: `nx.pagerank()`, `G.subgraph()`.
PageRank satisfies the following equation $$x_i = \alpha \sum_j a_{ji} \frac{x_j}{L(j)} + \frac{1 - \alpha}{N},$$ where $L(j) = \sum_i a_{ji}$ is the number of neighbors of node $j$ (or number of outbound links in a directed graph). Compared to eigenvector centrality and Katz centrality, one major difference is the scaling factor $L(j)$. Another difference between PageRank and eigenvector centrality is that the PageRank vector is a left hand eigenvector (note the factor $a_{ji}$ has indices reversed).
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
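The steps of the exercise above can be sketched with NetworkX, following its own hint (`nx.pagerank()`, `G.subgraph()`); the toy graph and the community partition passed in are assumptions for illustration:

```python
import networkx as nx

def community_influencers(G, communities):
    """Per community: isolate its subgraph, run PageRank inside it, and
    pick the node with maximum PageRank (max() breaks ties arbitrarily)."""
    influencers = {}
    for cid, nodes in communities.items():
        sub = G.subgraph(nodes)            # isolate the community
        pr = nx.pagerank(sub)              # PageRank within the community
        influencers[cid] = max(pr, key=pr.get)
    return influencers

# Toy graph: two triangles joined by a single bridge edge
G = nx.Graph([(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)])
inf = community_influencers(G, {0: [0, 1, 2], 1: [3, 4, 5]})
```

Running PageRank on the isolated subgraph (rather than the full graph) is what the exercise's first bullet asks for; the bridge edge (2, 3) is ignored inside each community.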
Given a document collection with a vocabulary consisting of three words, $V = \{a, b, c\}$, and two documents $d_1 = aabc$ and $d_2 = abc$. The query is $q = ab$. Using standard vector space retrieval, is it possible to enforce both a ranking $d_1 > d_2$ and $d_2 > d_1$ by adding suitable documents to the collection? If yes, give examples of such documents to be added; if no, provide an argument why this cannot be the case.
The vector space model has the following limitations: long documents are poorly represented because they have poor similarity values (a small scalar product and a large dimensionality); search keywords must precisely match document terms, and word substrings might result in a "false positive match"; semantic sensitivity: documents with similar context but different term vocabulary won't be associated, resulting in a "false negative match"; the order in which the terms appear in the document is lost in the vector space representation; it theoretically assumes terms are statistically independent; weighting is intuitive but not very formal. Many of these difficulties can, however, be overcome by the integration of various tools, including mathematical techniques such as singular value decomposition and lexical databases such as WordNet.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Given a document collection with a vocabulary consisting of three words, $V = \{a, b, c\}$, and two documents $d_1 = aabc$ and $d_2 = abc$. The query is $q = ab$. Using standard vector space retrieval, is it possible to enforce both a ranking $d_1 > d_2$ and $d_2 > d_1$ by adding suitable documents to the collection? If yes, give examples of such documents to be added; if no, provide an argument why this cannot be the case.
The vector space model has the following advantages over the Standard Boolean model: a simple model based on linear algebra; term weights not binary; allows computing a continuous degree of similarity between queries and documents; allows ranking documents according to their possible relevance; allows partial matching. Most of these advantages are a consequence of the difference in the density of the document collection representation between Boolean and term frequency-inverse document frequency approaches. When using Boolean weights, any document lies in a vertex in an $n$-dimensional hypercube. Therefore, the possible document representations are $2^n$ and the maximum Euclidean distance between pairs is $\sqrt{n}$.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
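The mechanism the exercise above probes is that under tf-idf weighting every document score depends on the whole collection. A sketch of cosine scoring (the weighting scheme, unsmoothed $\mathrm{idf} = \log(N/\mathrm{df})$, is an assumption; this only illustrates one direction, $d_2 > d_1$ after adding the document $c$, and does not settle the full exercise):

```python
import math
from collections import Counter

VOCAB = ["a", "b", "c"]

def tfidf(doc, docs):
    """tf-idf vector of `doc` over VOCAB, with idf computed from `docs`."""
    N = len(docs)
    tf = Counter(doc)
    vec = []
    for t in VOCAB:
        df = sum(1 for d in docs if t in d)
        vec.append(tf[t] * (math.log(N / df) if df else 0.0))
    return vec

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

docs = ["aabc", "abc", "c"]     # d1, d2 plus an added document "c"
scores = [cosine(tfidf(d, docs), tfidf("ab", docs)) for d in docs[:2]]
```

Here the added document drives $\mathrm{idf}(c)$ to zero, making $d_2$'s vector parallel to the query's, so $d_2$ ranks above $d_1$.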
Which of the following is true?
The hypothesis that (A) is true leads to the conclusion that (A) is false, a contradiction. If (A) is false, then "This statement is false" is false. Therefore, (A) must be true.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following is true?
a multiple of q. A contradiction is now reached. In fact, more is true. With the sole exception of 4, where 3!
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The inverse document frequency of a term can increase
The inverse document frequency is a measure of how much information the word provides, i.e., if it is common or rare across all documents. It is the logarithmically scaled inverse fraction of the documents that contain the word (obtained by dividing the total number of documents by the number of documents containing the term, and then taking the logarithm of that quotient): $$\mathrm{idf}(t, D) = \log \frac{N}{|\{d \in D : t \in d\}|}$$ with $N = |D|$ the total number of documents in the corpus, and $|\{d \in D : t \in d\}|$ the number of documents where the term $t$ appears (i.e., $\mathrm{tf}(t, d) \neq 0$). If the term is not in the corpus, this will lead to a division by zero. It is therefore common to adjust the numerator to $1 + N$ and the denominator to $1 + |\{d \in D : t \in d\}|$.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The inverse document frequency of a term can increase
As documents are added to the document collection, the region defined by the hypercube's vertices becomes more populated and hence denser. Unlike Boolean, when a document is added using term frequency-inverse document frequency weights, the inverse document frequencies of the terms in the new document decrease while those of the remaining terms increase. On average, as documents are added, the region where documents lie expands, regulating the density of the entire collection representation. This behavior models the original motivation of Salton and his colleagues that a document collection represented in a low density region could yield better retrieval results.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
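The idf behavior described above (a term's idf increases when documents that do not contain it are added) can be checked directly; this sketch assumes the unsmoothed $\log(N/\mathrm{df})$ form from the passage, with toy documents:

```python
import math

def idf(term, docs):
    """Unsmoothed inverse document frequency: idf(t, D) = log(N / df)."""
    df = sum(1 for d in docs if term in d)
    return math.log(len(docs) / df)

docs = ["a b c", "a b", "a"]
before = idf("c", docs)             # log(3/1): "c" is in one of three docs
docs_more = docs + ["a b", "a"]     # added documents do not contain "c"
after = idf("c", docs_more)         # log(5/1): idf of "c" has increased
```

Conversely, the idf of a term contained in every document (here `"a"`) stays at $\log 1 = 0$ no matter how many such documents are added.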
Which of the following is wrong regarding Ontologies?
Many of those who doubt the possibility of developing wide agreement on a common upper ontology fall into one of two traps: they assert that there is no possibility of universal agreement on any conceptual scheme, but they argue that a practical common ontology does not need to have universal agreement; it only needs a large enough user community (as is the case for human languages) to make it profitable for developers to use it as a means to general interoperability, and for third-party developers to develop utilities to make it easier to use; and they point out that developers of data schemes find different representations congenial for their local purposes, but they do not demonstrate that these different representations are in fact logically inconsistent. In fact, different representations of assertions about the real world (though not philosophical models), if they accurately reflect the world, must be logically consistent, even if they focus on different aspects of the same physical object or phenomenon. If any two assertions about the real world are logically inconsistent, one or both must be wrong, and that is a topic for experimental investigation, not for ontological representation. In practice, representations of the real world are created as and known to be approximations to the basic reality, and their use is circumscribed by the limits of error of measurements in any given practical application. Ontologies are entirely capable of representing approximations, and are also capable of representing situations in which different approximations have different utility.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In a Ranked Retrieval result, the result at position k is non-relevant and at k+1 is relevant. Which of the following is always true (P@k and R@k are the precision and recall of the result set consisting of the k top ranked documents)?
For modern (web-scale) information retrieval, recall is no longer a meaningful metric, as many queries have thousands of relevant documents, and few users will be interested in reading all of them. Precision at k documents (P@k) is still a useful metric (e.g., P@10 or "Precision at 10" corresponds to the number of relevant results among the top 10 retrieved documents), but fails to take into account the positions of the relevant documents among the top k. Another shortcoming is that on a query with fewer relevant results than k, even a perfect system will have a score less than 1. It is easier to score manually since only the top k results need to be examined to determine if they are relevant or not.
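A quick way to check which option is always true is to compute both metrics on a toy ranking. The helper functions below are illustrative, not from any particular library; with the non-relevant document at rank k and a relevant one at rank k+1, recall at k+1 is strictly larger than at k no matter what the earlier ranks contain.

```python
def precision_at_k(relevance, k):
    # relevance[i] is True if the document at rank i+1 is relevant
    return sum(relevance[:k]) / k

def recall_at_k(relevance, k, total_relevant):
    return sum(relevance[:k]) / total_relevant

# Ranked list where position k=3 is non-relevant and k+1=4 is relevant.
rels = [True, False, False, True, True]
p3, p4 = precision_at_k(rels, 3), precision_at_k(rels, 4)
r3, r4 = recall_at_k(rels, 3, 3), recall_at_k(rels, 4, 3)
```

In this example both metrics increase from k to k+1; for recall that increase holds in general, since the relevant document at k+1 adds to the numerator while the denominator (the total number of relevant documents) is fixed.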
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
What is true regarding Fagin's algorithm?
Fagin's theorem is the oldest result of descriptive complexity theory, a branch of computational complexity theory that characterizes complexity classes in terms of logic-based descriptions of their problems rather than by the behavior of algorithms for solving those problems. The theorem states that the set of all properties expressible in existential second-order logic is precisely the complexity class NP. It was proven by Ronald Fagin in 1973 in his doctoral thesis, and appears in his 1974 paper. The arity required by the second-order formula was improved (in one direction) in Lynch (1981), and several results of Grandjean have provided tighter bounds on nondeterministic random-access machines.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
What is true regarding Fagin's algorithm?
Fagin's theorem is a result in descriptive complexity theory that states that the set of all properties expressible in existential second-order logic is precisely the complexity class NP. It is remarkable since it is a characterization of the class NP that does not invoke a model of computation such as a Turing machine. The theorem was proven by Ronald Fagin in 1974 (strictly, in 1973 in his doctoral thesis). As a corollary, Jones and Selman showed that a set is a spectrum if and only if it is in the complexity class NEXP. One direction of the proof is to show that, for every first-order formula φ, the problem of determining whether there is a model of the formula of cardinality n is equivalent to the problem of satisfying a formula of size polynomial in n, which is in NP in n and thus in NEXP in the size of the input to the problem (the number n in binary form, which is a string of size log(n)).
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following is WRONG for Ontologies?
Many of those who doubt the possibility of developing wide agreement on a common upper ontology fall into one of two traps: they assert that there is no possibility of universal agreement on any conceptual scheme; but they argue that a practical common ontology does not need to have universal agreement, it only needs a large enough user community (as is the case for human languages) to make it profitable for developers to use it as a means to general interoperability, and for third-party developers to develop utilities to make it easier to use; and they point out that developers of data schemes find different representations congenial for their local purposes; but they do not demonstrate that these different representations are in fact logically inconsistent. In fact, different representations of assertions about the real world (though not philosophical models), if they accurately reflect the world, must be logically consistent, even if they focus on different aspects of the same physical object or phenomenon. If any two assertions about the real world are logically inconsistent, one or both must be wrong, and that is a topic for experimental investigation, not for ontological representation. In practice, representations of the real world are created as, and known to be, approximations to the basic reality, and their use is circumscribed by the limits of error of measurements in any given practical application. Ontologies are entirely capable of representing approximations, and are also capable of representing situations in which different approximations have different utility.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
What is the benefit of LDA over LSI?
LDA is a generalization of the older approach of probabilistic latent semantic analysis (pLSA). The pLSA model is equivalent to LDA under a uniform Dirichlet prior distribution. pLSA relies on only the first two assumptions above and does not care about the remainder. While both methods are similar in principle and require the user to specify the number of topics to be discovered before the start of training (as with K-means clustering), LDA has the following advantages over pLSA: LDA yields better disambiguation of words and a more precise assignment of documents to topics.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
What is the benefit of LDA over LSI?
Because of implementation costs, one may consider algorithms (like those that follow) that are similar to LRU, but which offer cheaper implementations. One important advantage of the LRU algorithm is that it is amenable to full statistical analysis. It has been proven, for example, that LRU can never result in more than N times more page faults than the OPT algorithm, where N is proportional to the number of pages in the managed pool.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Maintaining the order of document identifiers for vocabulary construction when partitioning the document collection is important
There is a taxonomic scheme associated with data collection systems, with readily identifiable synonyms used by different industries and organizations. Cataloging the most commonly used and widely accepted vocabulary improves efficiencies, helps reduce variations, and improves data quality. The vocabulary of data collection systems stems from the fact that these systems are often a software representation of what would otherwise be a paper data collection form with a complex internal structure of sections and sub-sections. Modeling these structures and relationships in software yields technical terms describing the hierarchy of data containers, along with a set of industry-specific synonyms.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Maintaining the order of document identifiers for vocabulary construction when partitioning the document collection is important
Computational lexicons and dictionaries. In Encyclopaedia of Language and Linguistics (2nd ed.), K. R. Brown, Ed.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following is correct regarding Crowdsourcing?
Crowdsourcing is a type of participative online activity in which an individual, an institution, a nonprofit organization, or a company proposes to a group of individuals of varying knowledge, heterogeneity, and number the voluntary undertaking of a task via a flexible open call. The undertaking of the task, of variable complexity and modularity, and in which the crowd should participate bringing their work, money, knowledge and/or experience, always entails mutual benefit. The user will receive the satisfaction of a given type of need, be it economic, social recognition, self-esteem, or the development of individual skills, while the crowdsourcer will obtain and use to their advantage that which the user has brought to the venture, whose form will depend on the type of activity undertaken. Caveats in pursuing a crowdsourcing strategy are the need for a substantial market model or incentive, and care has to be taken that the whole thing does not end up as an open-source anarchy of adware, spyware and plagiarized copies, with a lot of broken solutions started by people who just wanted to try it out and then gave up early, and only a few winners. Popular examples of crowdsourcing are Linux, Google Android, the Pirate Party movement, and Wikipedia.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following is correct regarding Crowdsourcing?
In contrast to outsourcing, crowdsourcing usually involves less specific and more public groups of participants. Advantages of using crowdsourcing include lowered costs, improved speed, improved quality, increased flexibility, and/or increased scalability of the work, as well as promoting diversity. Crowdsourcing methods include competitions, virtual labor markets, open online collaboration and data donation.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
When computing PageRank iteratively, the computation ends when...
PageRank can be computed either iteratively or algebraically. The iterative method can be viewed as the power iteration method or the power method. The basic mathematical operations performed are identical.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
When computing PageRank iteratively, the computation ends when...
Sarma et al. describe two random walk-based distributed algorithms for computing PageRank of nodes in a network. One algorithm takes O(log n / ε) rounds with high probability on any graph (directed or undirected), where n is the network size and ε is the reset probability (1 − ε, which is called the damping factor) used in the PageRank computation. They also present a faster algorithm that takes O(√(log n) / ε) rounds in undirected graphs. In both algorithms, each node processes and sends a number of bits per round that are polylogarithmic in n, the network size.
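For the question above — when the iterative computation ends — the usual stopping criterion is that successive rank vectors differ by less than a fixed tolerance. A minimal power-iteration sketch (the graph, damping factor and tolerance values are illustrative):

```python
def pagerank(links, d=0.85, tol=1e-10, max_iter=1000):
    """Power-iteration PageRank on a dict node -> list of out-links.

    Iteration stops when the L1 change between successive rank
    vectors drops below `tol` (the usual convergence criterion).
    """
    nodes = list(links)
    n = len(nodes)
    ranks = {u: 1.0 / n for u in nodes}
    for _ in range(max_iter):
        new = {u: (1.0 - d) / n for u in nodes}
        for u, outs in links.items():
            if outs:
                share = d * ranks[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:  # dangling node: spread its rank uniformly
                for v in nodes:
                    new[v] += d * ranks[u] / n
        if sum(abs(new[u] - ranks[u]) for u in nodes) < tol:
            return new
        ranks = new
    return ranks

pr = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]})
```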
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
How does LSI querying work?
Latent semantic indexing (LSI) is an indexing and retrieval method that uses a mathematical technique called singular value decomposition (SVD) to identify patterns in the relationships between the terms and concepts contained in an unstructured collection of text. LSI is based on the principle that words that are used in the same contexts tend to have similar meanings. A key feature of LSI is its ability to extract the conceptual content of a body of text by establishing associations between those terms that occur in similar contexts. LSI is also an application of correspondence analysis, a multivariate statistical technique developed by Jean-Paul Benzécri in the early 1970s, to a contingency table built from word counts in documents.
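A common way to answer the query question concretely: fold the query vector into the k-dimensional concept space of the truncated SVD (q_k = Σ_k⁻¹ U_kᵀ q, the standard fold-in formula) and rank documents by cosine similarity there. A sketch with NumPy on a made-up term-document matrix:

```python
import numpy as np

# Toy term-document matrix (rows = terms, columns = documents).
A = np.array([[2., 0., 1.],
              [1., 1., 0.],
              [0., 2., 1.]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                   # number of latent concepts kept
Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]

# Documents in concept space: columns of diag(sk) @ Vtk.
docs_k = (np.diag(sk) @ Vtk).T          # shape (n_docs, k)

# Fold the query into the same space: q_k = Sigma_k^-1 U_k^T q.
q = np.array([1., 1., 0.])
q_k = q @ Uk @ np.linalg.inv(np.diag(sk))

# Rank documents by cosine similarity in concept space.
sims = docs_k @ q_k / (np.linalg.norm(docs_k, axis=1) * np.linalg.norm(q_k))
ranking = np.argsort(-sims)
```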
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
How does LSI querying work?
The processing of a SELECT statement according to ANSI SQL would be the following:
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Vectorize the input with the Vector Space Model
Whereas the original problem may be stated in a finite-dimensional space, it often happens that the sets to discriminate are not linearly separable in that space. For this reason, it was proposed that the original finite-dimensional space be mapped into a much higher-dimensional space, presumably making the separation easier in that space. To keep the computational load reasonable, the mappings used by SVM schemes are designed to ensure that dot products of pairs of input data vectors may be computed easily in terms of the variables in the original space, by defining them in terms of a kernel function k(x, y) selected to suit the problem.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Vectorize the input with the Vector Space Model
Vector space model or term vector model is an algebraic model for representing text documents (and any objects, in general) as vectors of identifiers (such as index terms). It is used in information filtering, information retrieval, indexing and relevancy rankings. Its first use was in the SMART Information Retrieval System.
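A minimal sketch of the vector space model as described above: documents and the query become term-count vectors over a fixed vocabulary and are compared by cosine similarity. The vocabulary and documents are made up for illustration:

```python
from collections import Counter
from math import sqrt

def to_vector(tokens, vocab):
    """Represent a token list as a term-count vector over vocab."""
    counts = Counter(tokens)
    return [counts[t] for t in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

vocab = ["information", "retrieval", "model", "vector"]
d1 = to_vector("vector space model for information retrieval".split(), vocab)
d2 = to_vector("a probabilistic retrieval model".split(), vocab)
q = to_vector("vector model".split(), vocab)
scores = {"d1": cosine(d1, q), "d2": cosine(d2, q)}
```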
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Implement the precision at k metric
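A minimal implementation of the metric might look as follows; the signature (ranked list plus a set of relevant ids) is one reasonable choice, not a fixed convention:

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved items that are relevant.

    retrieved: ranked list of document ids (best first)
    relevant:  set of relevant document ids
    """
    if k <= 0:
        raise ValueError("k must be positive")
    top_k = retrieved[:k]
    return sum(1 for d in top_k if d in relevant) / k

p = precision_at_k(["d3", "d1", "d7", "d2"], {"d1", "d2", "d5"}, 3)
```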
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Implement the precision at k metric
1967. An introduction to metrication for printers. The London Printer 2, pp.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Suppose that an item in a leaf node N exists in every path. Which one is correct?
Figure 2 shows the conceptually same red–black tree without these NIL leaves. To arrive at the same notion of a path, one must notice that, e.g., 3 paths run through the node 1, namely a path through the left child of 1 plus 2 added paths through the right child of 1, namely the paths through the left and right children of 6. This way, these ends of the paths are also docking points for new nodes to be inserted, fully equivalent to the NIL leaves of figure 1.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Suppose that an item in a leaf node N exists in every path. Which one is correct?
The main idea is first to find a leaf node N where the new object O belongs. If N is not full then just attach it to N. If N is full then invoke a method to split N. The algorithm is as follows:
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
For the number of times the apriori algorithm and the FPgrowth algorithm for association rule mining are scanning the transaction database the following is true
The Apriori algorithm was proposed by Agrawal and Srikant in 1994. Apriori is designed to operate on databases containing transactions (for example, collections of items bought by customers, or details of a website frequentation or IP addresses). Other algorithms are designed for finding association rules in data having no transactions (Winepi and Minepi), or having no timestamps (DNA sequencing). Each transaction is seen as a set of items (an itemset).
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
For the number of times the apriori algorithm and the FPgrowth algorithm for association rule mining are scanning the transaction database the following is true
Apriori is an algorithm for frequent item set mining and association rule learning over relational databases. It proceeds by identifying the frequent individual items in the database and extending them to larger and larger item sets as long as those item sets appear sufficiently often in the database. The frequent item sets determined by Apriori can be used to determine association rules which highlight general trends in the database: this has applications in domains such as market basket analysis.
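The level-wise structure described above (extend frequent k-itemsets to (k+1)-candidates, then count support) can be sketched in a few lines. Note this simplified version performs one database scan per candidate level and omits the usual pruning step; the transactions are toy data:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Level-wise frequent-itemset mining; one database scan per level."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}
    candidates = [frozenset([i]) for i in items]
    frequent, scans = {}, 0
    while candidates:
        scans += 1  # one pass over the database per level
        counts = {c: sum(1 for t in transactions if c <= t)
                  for c in candidates}
        level = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(level)
        # Join step: build (k+1)-candidates from frequent k-itemsets.
        keys = list(level)
        candidates = list({a | b for a in keys for b in keys
                           if len(a | b) == len(a) + 1})
    return frequent, scans

txns = [{"milk", "bread"}, {"milk", "beer"},
        {"milk", "bread", "beer"}, {"bread"}]
freq, n_scans = apriori(txns, min_support=2)
```

On this toy database the algorithm needs three scans: one for the singletons, one for the pairs, and one for the single 3-itemset candidate.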
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Given the following teleporting matrix (Ε) for nodes A, B and C:[0    ½    0][0     0    0][0    ½    1]and making no assumptions about the link matrix (R), which of the following is correct:(Reminder: columns are the probabilities to leave the respective node.)
These four possibilities are represented in matrix form by μ_a, μ_b, μ_c, and μ_d. Observe that node 6 can neither send nor receive under any of these possibilities. This might arise because node 6 is currently out of communication range. The weighted sum of rates for each of the 4 possibilities are: Choice (a): ∑_{ab} W_{ab}(t) μ_{ab}(t) = 12.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Given the following teleporting matrix (Ε) for nodes A, B and C:[0    ½    0][0     0    0][0    ½    1]and making no assumptions about the link matrix (R), which of the following is correct:(Reminder: columns are the probabilities to leave the respective node.)
Let an absorbing Markov chain with transition matrix P have t transient states and r absorbing states. Unlike a typical transition matrix, the rows of P represent sources, while columns represent destinations. Then P = [Q R; 0 I_r], where Q is a t-by-t matrix, R is a nonzero t-by-r matrix, 0 is an r-by-t zero matrix, and I_r is the r-by-r identity matrix. Thus, Q describes the probability of transitioning from some transient state to another, while R describes the probability of transitioning from some transient state to some absorbing state. The probability of transitioning from i to j in exactly k steps is the (i,j)-entry of P^k. When considering only transient states, it is the (i,j)-entry of Q^k, which is found in the upper left block of P^k.
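The block decomposition can be exercised numerically: the fundamental matrix N = (I − Q)⁻¹ gives expected visits to transient states, and B = NR gives absorption probabilities. A small NumPy sketch on a made-up chain; with a single absorbing state, every entry of B must equal 1:

```python
import numpy as np

# Transient states {1, 2}, one absorbing state {3}; rows are sources.
Q = np.array([[0.0, 0.5],
              [0.4, 0.0]])
R = np.array([[0.5],
              [0.6]])

N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix: expected visits
B = N @ R                          # absorption probabilities
```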
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following methods does not exploit statistics on the co-occurrence of words in a text?
For a more recent method, see Řehůřek and Kolkus (2009). This method can detect multiple languages in an unstructured piece of text and works robustly on short texts of only a few words: something that the n-gram approaches struggle with. An older statistical method by Grefenstette was based on the prevalence of certain function words (e.g., "the" in English). A common non-statistical intuitive approach (though highly uncertain) is to look for common letter combinations, or distinctive diacritics or punctuation.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following methods does not exploit statistics on the co-occurrence of words in a text?
These are considered by many to show promise but are not wholly accepted by traditionalists. However, they are not intended to replace older methods but to supplement them. Such statistical methods cannot be used to derive the features of a proto-language, apart from the fact of the existence of shared items of the compared vocabulary. These approaches have been challenged for their methodological problems, since without a reconstruction or at least a detailed list of phonological correspondences there can be no demonstration that two words in different languages are cognate.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Implement Item-based collaborative filtering using the following formula: \begin{equation} {r}_{x}(a) = \frac{\sum\limits_{b \in N_{I}(a)} sim(a, b) r_{x}(b)}{\sum\limits_{b \in N_{I}(a)}|sim(a, b)|} \end{equation} You will create a function that takes as input the ratings and the similarity matrix and gives as output the predicted ratings.
Collaborative filtering systems have many forms, but many common systems can be reduced to two steps: (1) look for users who share the same rating patterns with the active user (the user whom the prediction is for); (2) use the ratings from those like-minded users found in step 1 to calculate a prediction for the active user. This falls under the category of user-based collaborative filtering. A specific application of this is the user-based Nearest Neighbor algorithm. Alternatively, item-based collaborative filtering (users who bought x also bought y) proceeds in an item-centric manner: build an item-item matrix determining relationships between pairs of items, then infer the tastes of the current user by examining the matrix and matching that user's data. See, for example, the Slope One item-based collaborative filtering family.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Implement Item-based collaborative filtering using the following formula: \begin{equation} {r}_{x}(a) = \frac{\sum\limits_{b \in N_{I}(a)} sim(a, b) r_{x}(b)}{\sum\limits_{b \in N_{I}(a)}|sim(a, b)|} \end{equation} You will create a function that takes as input the ratings and the similarity matrix and gives as output the predicted ratings.
Item-item collaborative filtering, or item-based, or item-to-item, is a form of collaborative filtering for recommender systems based on the similarity between items calculated using people's ratings of those items. Item-item collaborative filtering was invented and used by Amazon.com in 1998. It was first published in an academic conference in 2001. Earlier collaborative filtering systems based on rating similarity between users (known as user-user collaborative filtering) had several problems: systems performed poorly when they had many items but comparatively few ratings; computing similarities between all pairs of users was expensive; and user profiles changed quickly, so the entire system model had to be recomputed. Item-item models resolve these problems in systems that have more users than items.
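The prediction formula from the question, r_x(a) = Σ_b sim(a,b) r_x(b) / Σ_b |sim(a,b)|, can be implemented directly. The ratings and similarities below are toy values, and the neighborhood N_I(a) is taken to be all items the user has rated (a simplifying assumption):

```python
def predict_ratings(ratings, sim):
    """Item-based CF: r_x(a) = sum_b sim(a,b) r_x(b) / sum_b |sim(a,b)|.

    ratings: dict user -> dict item -> observed rating
    sim:     dict item -> dict item -> similarity
    Returns dict user -> dict item -> predicted rating for every item
    with at least one rated neighbor.
    """
    preds = {}
    for user, user_ratings in ratings.items():
        preds[user] = {}
        for item in sim:
            num = den = 0.0
            for other, r in user_ratings.items():
                if other == item:
                    continue
                s = sim[item].get(other, 0.0)
                num += s * r
                den += abs(s)
            if den > 0:
                preds[user][item] = num / den
    return preds

sim = {"i1": {"i2": 0.8, "i3": 0.2},
       "i2": {"i1": 0.8, "i3": 0.5},
       "i3": {"i1": 0.2, "i2": 0.5}}
ratings = {"alice": {"i1": 4.0, "i2": 2.0}}
preds = predict_ratings(ratings, sim)
```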
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which attribute gives the best split? A1: a = (P:4, N:4), b = (P:4, N:4); A2: x = (P:5, N:1), y = (P:3, N:3); A3: t = (P:6, N:1), j = (P:2, N:3)
FEN: r2qrb1k/1p1b2p1/p2ppn1p/8/3NP3/1BN5/PPP3QP/1K3RR1 w - - 0 1 21.e5!! dxe5 22.Ne4! Nh5 23.Qg6!? (stronger is 23.Qg4!!
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which attribute gives the best split?A1PNa44b44A2PNx51y33A3PNt61j23
FEN: r3qb1k/1b4p1/p2pr2p/3n4/Pnp1N1N1/6RP/1B3PP1/1B1QR1K1 w - - 0 1 26.Nxh6!! c3 (26... Rxh6 27.Nxd6 Qh5 (best) 28.Rg5! Qxd1 29.Nf7+ Kg8 30.Nxh6+ Kh8 31.Rxd1 c3 32.Nf7+ Kg8 33.Bg6! Nf4 34.Bxc3 Nxg6 35.Bxb4 Kxf7 36.Rd7+ Kf6 37.Rxg6+ Kxg6 38.Rxb7 +-) 27.Nf5!
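Reading the run-together table in the question as A1: a=(P:4, N:4), b=(P:4, N:4); A2: x=(P:5, N:1), y=(P:3, N:3); A3: t=(P:6, N:1), j=(P:2, N:3) — an assumed parsing — the split quality can be compared by information gain:

```python
from math import log2

def entropy(pos, neg):
    total = pos + neg
    h = 0.0
    for c in (pos, neg):
        if c:
            p = c / total
            h -= p * log2(p)
    return h

def info_gain(splits):
    """splits: list of (pos, neg) counts, one per attribute value."""
    pos = sum(p for p, _ in splits)
    neg = sum(n for _, n in splits)
    total = pos + neg
    return entropy(pos, neg) - sum(
        (p + n) / total * entropy(p, n) for p, n in splits)

gains = {
    "A1": info_gain([(4, 4), (4, 4)]),   # values a, b
    "A2": info_gain([(5, 1), (3, 3)]),   # values x, y
    "A3": info_gain([(6, 1), (2, 3)]),   # values t, j
}
best = max(gains, key=gains.get)
```

Under this reading, A1 gives zero gain (both branches stay 50/50) and A3 gives the highest.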
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Suppose that q is density reachable from p. The chain of points that ensure this relationship are {t,u,g,r} Which one is FALSE?
P satisfies the countable chain condition if every antichain in P is at most countable. This implies that V and V[G] have the same cardinals (and the same cofinalities). A subset D of P is called dense if for every p ∈ P there is some q ∈ D with q ≤ p. A filter on P is a nonempty subset F of P such that if p < q and p ∈ F then q ∈ F, and if p ∈ F and q ∈ F then there is some r ∈ F with r ≤ p and r ≤ q. A subset G of P is called generic over M if it is a filter that meets every dense subset of P in M.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Suppose that q is density reachable from p. The chain of points that ensure this relationship are {t,u,g,r} Which one is FALSE?
t is reachable from s if there exists a sequence of vertices v_0 = s, v_1, v_2, ..., v_k = t such that the edge (v_{i-1}, v_i) is in E for all 1 ≤ i ≤ k. If G is acyclic, then its reachability relation is a partial order; any partial order may be defined in this way, for instance as the reachability relation of its transitive reduction. A noteworthy consequence of this is that since partial orders are anti-symmetric, if s can reach t, then we know that t cannot reach s. Intuitively, if we could travel from s to t and back to s, then G would contain a cycle, contradicting that it is acyclic. If G is directed but not acyclic (i.e. it contains at least one cycle), then its reachability relation will correspond to a preorder instead of a partial order.
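Reachability as defined above is easy to test with a depth-first search; the graph below is illustrative and, being acyclic, also demonstrates the anti-symmetry noted in the text:

```python
def reachable(edges, s, t):
    """DFS test of whether t is reachable from s in a directed graph."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    seen, stack = set(), [s]
    while stack:
        u = stack.pop()
        if u == t:
            return True
        if u in seen:
            continue
        seen.add(u)
        stack.extend(adj.get(u, []))
    return False

edges = [("s", "a"), ("a", "b"), ("b", "t"), ("t", "c")]
```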
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In User-Based Collaborative Filtering, which of the following is correct, assuming that all the ratings are positive?
These ratings can be viewed as an approximate representation of the user's interest in the corresponding domain. The system matches this user's ratings against other users' and finds the people with most "similar" tastes. With similar users, the system recommends items that the similar users have rated highly but that have not yet been rated by this user (presumably the absence of a rating is often considered as unfamiliarity with an item). A key problem of collaborative filtering is how to combine and weight the preferences of user neighbors. Sometimes, users can immediately rate the recommended items. As a result, the system gains an increasingly accurate representation of user preferences over time.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In User-Based Collaborative Filtering, which of the following is correct, assuming that all the ratings are positive?
In a recommendation system where everyone can give the ratings, people may give many positive ratings for their own items and negative ratings for their competitors'. It is often necessary for the collaborative filtering systems to introduce precautions to discourage such manipulations.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The term frequency of a term is normalized
In digital signal processing (DSP), a normalized frequency is a ratio of a variable frequency (f) and a constant frequency associated with a system (such as a sampling rate, fs). Some software applications require normalized inputs and produce normalized outputs, which can be re-scaled to physical units when necessary. Mathematical derivations are usually done in normalized units, relevant to a wide range of applications.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The term frequency of a term is normalized
The term "frequencies" refers to absolute numbers rather than already normalized values. The value of the test statistic is χ² = Σ_{i=1}^{r} Σ_{j=1}^{c} (O_{i,j} − E_{i,j})² / E_{i,j} = N Σ_{i,j} p_{i·} p_{·j} ((O_{i,j}/N − p_{i·} p_{·j}) / (p_{i·} p_{·j}))². Note that χ² is 0 if and only if O_{i,j} = E_{i,j} for all i, j, i.e. only if the expected and observed numbers of observations are equal in all cells. Fitting the model of "independence" reduces the number of degrees of freedom by p = r + c − 1.
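For the question above, one common term-frequency normalization in IR divides each raw frequency by the maximum frequency of any term in the same document (one scheme among several; the example document is made up):

```python
from collections import Counter

def normalized_tf(document_tokens):
    """Term frequency normalized by the maximum frequency in the document."""
    counts = Counter(document_tokens)
    max_f = max(counts.values())
    return {t: f / max_f for t, f in counts.items()}

tf = normalized_tf("to be or not to be".split())
```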
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which is an appropriate method for fighting skewed distributions of class labels in classification?
It was observed that some conventional classification problems can be generalized in the framework of the label ranking problem: if a training instance $x$ is labeled as class $y_i$, this implies that $\forall j \neq i,\; y_i \succ_x y_j$.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Given a document collection with a vocabulary consisting of three words, $V = \{a, b, c\}$, and two documents $d_1 = aabc$ and $d_2 = abc$. The query is $q = ab$. Is it possible to enforce a ranking $d_2 > d_1$ with vector space retrieval and $d_1 > d_2$ with probabilistic retrieval ($\lambda = 0.5$) by adding the same documents to the collection? If yes, give examples of such documents to be added; if no, provide an argument why this cannot be the case.
Zhao and Callan (2010) were perhaps the first to quantitatively study the vocabulary mismatch problem in a retrieval setting. Their results show that an average query term fails to appear in 30-40% of the documents that are relevant to the user query. They also showed that this probability of mismatch is a central probability in one of the fundamental probabilistic retrieval models, the Binary Independence Model. They developed novel term weight prediction methods that can lead to potentially 50-80% accuracy gains in retrieval over strong keyword retrieval models. Further research along the line shows that expert users can use Boolean Conjunctive Normal Form expansion to improve retrieval performance by 50-300% over unexpanded keyword queries.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Given a document collection with a vocabulary consisting of three words, $V = \{a, b, c\}$, and two documents $d_1 = aabc$ and $d_2 = abc$. The query is $q = ab$. Is it possible to enforce a ranking $d_2 > d_1$ with vector space retrieval and $d_1 > d_2$ with probabilistic retrieval ($\lambda = 0.5$) by adding the same documents to the collection? If yes, give examples of such documents to be added; if no, provide an argument why this cannot be the case.
Classic information retrieval models such as the vector space model provide relevance ranking, but do not include document structure; only flat queries are supported. Also, they apply a static document concept, so retrieval units usually are entire documents. They can be extended to consider structural information and dynamic document retrieval. Examples for approaches extending the vector space models are available: they use document subtrees (index terms plus structure) as dimensions of the vector space.
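A minimal sketch of vector space relevance ranking as described above, using raw term-frequency vectors and cosine similarity (a simplifying assumption; real systems typically use tf-idf weights). It happens to use the tiny two-document collection from the question stem:

```python
import math
from collections import Counter

def tf_vector(text, vocab):
    """Raw term-frequency vector of a string over a fixed vocabulary."""
    counts = Counter(text)
    return [counts[t] for t in vocab]

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = (math.sqrt(sum(a * a for a in u))
           * math.sqrt(sum(b * b for b in v)))
    return num / den if den else 0.0

vocab = ["a", "b", "c"]
docs = {"d1": "aabc", "d2": "abc"}
query = tf_vector("ab", vocab)

scores = {d: cosine(tf_vector(text, vocab), query) for d, text in docs.items()}
print(sorted(scores, key=scores.get, reverse=True))  # ['d1', 'd2']
```

With these raw counts, d1 = (2, 1, 1) scores higher against q = (1, 1, 0) than d2 = (1, 1, 1) does.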
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Thang, Jeremie and Tugrulcan have built their own search engines. For a query Q, they got precision scores of 0.6, 0.7, and 0.8 respectively. Their F1 scores (calculated with the same parameters) are the same. Whose search engine has the higher recall on Q?
For modern (web-scale) information retrieval, recall is no longer a meaningful metric, as many queries have thousands of relevant documents, and few users will be interested in reading all of them. Precision at k documents (P@k) is still a useful metric (e.g., P@10 or "Precision at 10" corresponds to the number of relevant results among the top 10 retrieved documents), but fails to take into account the positions of the relevant documents among the top k. Another shortcoming is that on a query with fewer relevant results than k, even a perfect system will have a score less than 1. It is easier to score manually since only the top k results need to be examined to determine if they are relevant or not.
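The P@k metric described above can be sketched in a few lines; the document IDs and relevance judgments below are made up for illustration:

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved documents that are relevant."""
    top_k = retrieved[:k]
    return sum(1 for d in top_k if d in relevant) / k

retrieved = ["d3", "d1", "d7", "d2", "d9", "d4", "d8", "d5", "d6", "d10"]
relevant = {"d1", "d2", "d3", "d4"}
print(precision_at_k(retrieved, relevant, 10))  # 4 relevant in top 10 -> 0.4
```

Note the shortcoming the passage mentions: the division is always by k, so with fewer than k relevant documents in existence, even a perfect ranking scores below 1.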
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
When compressing the adjacency list of a given URL, a reference list
Patterns include: == References ==
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
When compressing the adjacency list of a given URL, a reference list
For use as a data structure, the main alternative to the adjacency list is the adjacency matrix. Because each entry in the adjacency matrix requires only one bit, it can be represented in a very compact way, occupying only |V|²/8 bytes of contiguous space, where |V| is the number of vertices of the graph. Besides avoiding wasted space, this compactness encourages locality of reference. However, for a sparse graph, adjacency lists require less space, because they do not waste any space to represent edges that are not present.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Implement probabilistic estimation of kNN classification
The k-NN algorithm is a well-known pattern recognition algorithm where a set of predetermined prototypes {pk} is used during the sample, or testing, phase of a supposed event. The prototypes model the events that are of interest in the application. The distance between each test vector and each prototype is calculated, and the k test vectors closest to the prototype vectors are taken as the most likely classification or group of classifications. From there the probability that x belongs to the prototype event can be calculated. This approach, however, requires much memory and processing power as the number of prototypes increases, and thus it is not a very practical choice for WSNs. It does, however, act as a good baseline to gauge the performance of other classifiers, since it is well known and the probability of misclassification with k = 1 asymptotically approaches at most twice the optimal Bayes error.
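A minimal sketch of the probabilistic reading of kNN described above: estimate P(class | x) as each class's share of the k nearest prototypes. Euclidean distance and the tiny prototype set are illustration choices:

```python
from collections import Counter

def knn_class_probs(x, prototypes, k):
    """Estimate P(class | x) as the per-class fraction among the k nearest prototypes.

    prototypes: list of (feature_vector, label) pairs; plain Euclidean distance assumed.
    """
    dist = lambda p: sum((a - b) ** 2 for a, b in zip(x, p)) ** 0.5
    nearest = sorted(prototypes, key=lambda pl: dist(pl[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return {label: count / k for label, count in votes.items()}

protos = [((0, 0), "A"), ((0, 1), "A"), ((1, 0), "A"), ((5, 5), "B"), ((5, 6), "B")]
print(knn_class_probs((1, 1), protos, k=3))  # all 3 nearest prototypes are class A
```

The memory/processing concern in the passage is visible here: every query scans and sorts all prototypes.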
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Implement probabilistic estimation of kNN classification
In fault detection and diagnosis, mathematical classification models, which in fact belong to supervised learning methods, are trained on the training set of a labeled dataset to accurately identify redundancies, faults and anomalous samples. Over the past decades, different classification and preprocessing models have been developed and proposed in this research area. The k-nearest-neighbors algorithm (kNN) is one of the oldest techniques used to solve fault detection and diagnosis problems. Despite the simple logic of this instance-based algorithm, there are problems with large dimensionality and processing time when it is used on large datasets.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Data being classified as unstructured or structured depends on the:
The term is imprecise for several reasons: structure, while not formally defined, can still be implied; data with some form of structure may still be characterized as unstructured if its structure is not helpful for the processing task at hand; and unstructured information might have some structure (semi-structured) or even be highly structured, but in ways that are unanticipated or unannounced.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Data being classified as unstructured or structured depends on the:
Health data are classified as either structured or unstructured. Structured health data is standardized and easily transferable between health information systems. For example, a patient's name, date of birth, or a blood-test result can be recorded in a structured data format. Unstructured health data, unlike structured data, is not standardized.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
With negative sampling a set of negative samples is created for
A lower bound on the number of tests needed can be described using the notion of the sample space, denoted $\mathcal{S}$, which is simply the set of possible placements of defectives. For any group testing problem with sample space $\mathcal{S}$ and any group-testing algorithm, it can be shown that $t \geq \lceil \log_2 |\mathcal{S}| \rceil$, where $t$ is the minimum number of tests required to identify all defectives with a zero probability of error. This is called the information lower bound. This bound is derived from the fact that after each test, $\mathcal{S}$ is split into two disjoint subsets, each corresponding to one of the two possible outcomes of the test.
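The information lower bound is straightforward to evaluate. As an illustrative assumption, take the common setting where exactly d of n items may be defective, so the sample space has C(n, d) elements:

```python
import math

def information_lower_bound(n, d):
    """Minimum tests t >= ceil(log2 |S|) when exactly d of n items may be defective.

    Here the sample space S is the set of C(n, d) possible defective sets.
    """
    sample_space_size = math.comb(n, d)
    return math.ceil(math.log2(sample_space_size))

print(information_lower_bound(100, 2))  # |S| = C(100, 2) = 4950, so t >= 13
```

Each binary test outcome yields at most one bit, which is exactly why the bound is logarithmic in |S|.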
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
With negative sampling a set of negative samples is created for
Suppose that we are in the one-sample situation and have the following thirteen observations: 0, 2, 3, 4, 6, 7, 8, 9, 11, 14, 15, 17, −18. The reduced sample procedure removes the zero. To the remaining data, it assigns the signed ranks: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, −12. This has a one-sided p-value of $55/2^{12}$, and therefore the sample is not significantly positive at any significance level $\alpha < 55/2^{12} \approx 0.0134$. Pratt argues that one would expect that decreasing the observations should certainly not make the data appear more positive.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Suppose you have a search engine that retrieves the top 100 documents and achieves 90% precision and 20% recall. You modify the search engine to retrieve the top 200 and mysteriously, the precision stays the same. Which one is CORRECT?
Recall measures the quantity of relevant results returned by a search, while precision is the measure of the quality of the results returned. Recall is the ratio of relevant results returned to all relevant results. Precision is the ratio of the number of relevant results returned to the total number of results returned. The diagram at right represents a low-precision, low-recall search.
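The two ratios defined above can be sketched directly; the document IDs below are invented for illustration:

```python
def precision_recall(retrieved, relevant):
    """Precision = relevant results returned / all returned; recall = / all relevant."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved)
    recall = hits / len(relevant)
    return precision, recall

p, r = precision_recall(["d1", "d2", "d3", "d4"], ["d1", "d2", "d5", "d6", "d7"])
print(p, r)  # 2/4 = 0.5 precision, 2/5 = 0.4 recall
```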
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Suppose you have a search engine that retrieves the top 100 documents and achieves 90% precision and 20% recall. You modify the search engine to retrieve the top 200 and mysteriously, the precision stays the same. Which one is CORRECT?
For modern (web-scale) information retrieval, recall is no longer a meaningful metric, as many queries have thousands of relevant documents, and few users will be interested in reading all of them. Precision at k documents (P@k) is still a useful metric (e.g., P@10 or "Precision at 10" corresponds to the number of relevant results among the top 10 retrieved documents), but fails to take into account the positions of the relevant documents among the top k. Another shortcoming is that on a query with fewer relevant results than k, even a perfect system will have a score less than 1. It is easier to score manually since only the top k results need to be examined to determine if they are relevant or not.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In the χ2 statistics for a binary feature, we obtain P(χ2 | DF = 1) > 0.05. This means in this case, it is assumed:
$\chi^2$ values vs. p-values: The p-value is the probability of observing a test statistic at least as extreme in a chi-squared distribution. Accordingly, since the cumulative distribution function (CDF) for the appropriate degrees of freedom (df) gives the probability of having obtained a value less extreme than this point, subtracting the CDF value from 1 gives the p-value. A low p-value, below the chosen significance level, indicates statistical significance, i.e., sufficient evidence to reject the null hypothesis. A significance level of 0.05 is often used as the cutoff between significant and non-significant results. Standard tables give p-values matching $\chi^2$ for the first 10 degrees of freedom. These values can be calculated by evaluating the quantile function (also known as the "inverse CDF" or "ICDF") of the chi-squared distribution; e.g., the $\chi^2$ ICDF for p = 0.05 and df = 7 yields 2.1673 ≈ 2.17, noting that 1 − p is the p-value read from such a table.
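For the df = 1 case relevant to a binary feature, the p-value (survival function) has a closed form via the complementary error function, since a chi-squared variable with one degree of freedom is a squared standard normal: $P(\chi^2_1 > x) = \operatorname{erfc}(\sqrt{x/2})$. A small sketch using only the standard library:

```python
import math

def chi2_pvalue_df1(x):
    """P-value P(chi2 > x) for the chi-squared distribution with 1 degree of freedom.

    Uses the closed form P(chi2_1 > x) = erfc(sqrt(x / 2)).
    """
    return math.erfc(math.sqrt(x / 2.0))

# The classic 5% critical value for df = 1 is about 3.841:
print(round(chi2_pvalue_df1(3.841), 3))  # 0.05
```

A value of the statistic below 3.841 therefore yields P(χ² | DF = 1) > 0.05, i.e. insufficient evidence to reject the null hypothesis of independence.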
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
In the χ2 statistics for a binary feature, we obtain P(χ2 | DF = 1) > 0.05. This means in this case, it is assumed:
Because of this, one should expect the statistic to assume low values if $\bar{\mathbf{x}} \approx \boldsymbol{\mu}$, and high values if they are different. From the distribution, $t^2 \sim T^2_{p,n-1} = \frac{p(n-1)}{n-p} F_{p,n-p}$, where $F_{p,n-p}$ is the F-distribution with parameters $p$ and $n - p$. In order to calculate a p-value (unrelated to the variable $p$ here), note that the distribution of $t^2$ equivalently implies that $\frac{n-p}{p(n-1)} t^2 \sim F_{p,n-p}$.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Implement a Rocchio classifier
In machine learning, a nearest centroid classifier or nearest prototype classifier is a classification model that assigns to observations the label of the class of training samples whose mean (centroid) is closest to the observation. When applied to text classification using word vectors containing tf*idf weights to represent documents, the nearest centroid classifier is known as the Rocchio classifier because of its similarity to the Rocchio algorithm for relevance feedback. An extended version of the nearest centroid classifier has found applications in the medical domain, specifically classification of tumors.
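A minimal nearest centroid classifier as described above, sketched on raw feature vectors (a Rocchio classifier proper would use tf*idf document vectors instead; the data and labels below are made up):

```python
def fit_centroids(X, y):
    """Mean feature vector (centroid) per class label."""
    sums, counts = {}, {}
    for xv, label in zip(X, y):
        counts[label] = counts.get(label, 0) + 1
        sums[label] = [s + v for s, v in zip(sums.get(label, [0] * len(xv)), xv)]
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def predict(x, centroids):
    """Label of the closest centroid (squared Euclidean distance)."""
    sq = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda label: sq(centroids[label]))

X = [[0, 0], [1, 0], [0, 1], [8, 8], [9, 8], [8, 9]]
y = ["neg", "neg", "neg", "pos", "pos", "pos"]
centroids = fit_centroids(X, y)
print(predict([7, 7], centroids))  # closer to the "pos" centroid
```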
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Implement a Rocchio classifier
The Rocchio algorithm is based on a method of relevance feedback found in information retrieval systems which stemmed from the SMART Information Retrieval System developed between 1960 and 1964. Like many other retrieval systems, the Rocchio algorithm was developed using the vector space model. Its underlying assumption is that most users have a general conception of which documents should be denoted as relevant or irrelevant. Therefore, the user's search query is revised to include an arbitrary percentage of relevant and irrelevant documents as a means of increasing the search engine's recall, and possibly the precision as well. The number of relevant and irrelevant documents allowed to enter a query is dictated by the weights of the a, b, c variables listed below in the Algorithm section.
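The relevance feedback update with the a, b, c weights mentioned above can be sketched as follows. The weight values and the tiny three-term vectors are illustrative assumptions (the 1.0/0.75/0.15 triple is a common textbook default, not mandated by the algorithm):

```python
def rocchio_update(query, relevant, nonrelevant, a=1.0, b=0.75, c=0.15):
    """Modified query q' = a*q + b*mean(relevant) - c*mean(nonrelevant).

    All vectors share one dimensionality; a, b, c weight the original query,
    the relevant documents, and the non-relevant documents respectively.
    """
    dim = len(query)
    mean = (lambda vecs: [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
            if vecs else [0.0] * dim)
    rel, nonrel = mean(relevant), mean(nonrelevant)
    return [a * query[i] + b * rel[i] - c * nonrel[i] for i in range(dim)]

q = [1.0, 0.0, 0.0]
rel_docs = [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]]
nonrel_docs = [[0.0, 0.0, 1.0]]
print(rocchio_update(q, rel_docs, nonrel_docs))
```

The update drags the query vector toward the centroid of the relevant documents and away from the non-relevant ones, which is how recall (and possibly precision) improves on the next retrieval round.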
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following is correct regarding the use of Hidden Markov Models (HMMs) for entity recognition in text documents?
For example, in a visible Markov model the word "the" should predict with accuracy the following word, while in a hidden Markov model, the entire prior text implies the actual state and predicts the following words, but does not actually guarantee that state or prediction. Since the latter case is what's encountered in spam filtering, hidden Markov models are almost always used. In particular, because of storage limitations, the specific type of hidden Markov model called a Markov random field is particularly applicable, usually with a clique size of between four and six tokens.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following is correct regarding the use of Hidden Markov Models (HMMs) for entity recognition in text documents?
Processing these “implicit parts” to achieve eventual word identification requires specific statistical procedures involving Hidden Markov Models (HMM). A Markov model is a statistical representation of a random process in which, given the present state, future states are independent of states occurring before the present. In such a process, a given state depends only on the conditional probability of its following the state immediately before it.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
10 itemsets out of 100 contain item A, of which 5 also contain B. The rule A -> B has:
An item with a dot before a nonterminal, such as E → E + • B, indicates that the parser expects to parse the nonterminal B next. To ensure the item set contains all possible rules the parser may be in the midst of parsing, it must include all items describing how B itself will be parsed. This means that if there are rules such as B → 1 and B → 0 then the item set must also include the items B → • 1 and B → • 0. In general this can be formulated as follows: If there is an item of the form A → v • Bw in an item set and in the grammar there is a rule of the form B → w' then the item B → • w' should also be in the item set.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
10 itemsets out of 100 contain item A, of which 5 also contain B. The rule A -> B has:
The construction of these parsing tables is based on the notion of LR(0) items (simply called items here), which are grammar rules with a special dot added somewhere in the right-hand side. For example, the rule E → E + B has the following four corresponding items: E → • E + B, E → E • + B, E → E + • B, and E → E + B •. Rules of the form A → ε have only the single item A → •. The item E → E • + B, for example, indicates that the parser has recognized a string corresponding with E on the input stream and now expects to read a '+' followed by another string corresponding with B.
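The closure rule described two passages above (if A → v • B w is in an item set, add B → • w' for every rule B → w') can be sketched directly. Items are represented as (head, body, dot-position) triples, and the toy grammar mirrors the E/B example from the text:

```python
def closure(items, grammar):
    """LR(0) closure: if A -> v . B w is in the set, add B -> . w' for every rule B -> w'.

    Items are (head, body, dot) triples; grammar maps each nonterminal to its bodies.
    The nonterminals are exactly the grammar's keys; everything else is a terminal.
    """
    result = set(items)
    changed = True
    while changed:
        changed = False
        for head, body, dot in list(result):
            if dot < len(body) and body[dot] in grammar:   # dot sits before a nonterminal
                for production in grammar[body[dot]]:
                    item = (body[dot], production, 0)      # B -> . w'
                    if item not in result:
                        result.add(item)
                        changed = True
    return result

grammar = {"E": [("E", "+", "B"), ("B",)], "B": [("0",), ("1",)]}
items = closure({("E", ("E", "+", "B"), 2)}, grammar)      # start from E -> E + . B
for it in sorted(items):
    print(it)
```

Starting from E → E + • B, the closure adds B → • 0 and B → • 1, exactly as the passage requires.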
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
A basic statement in RDF would be expressed in the relational data model by a table
The flat (or table) model consists of a single, two-dimensional array of data elements, where all members of a given column are assumed to be similar values, and all members of a row are assumed to be related to one another. For instance, columns for name and password that might be used as a part of a system security database. Each row would have the specific password associated with an individual user. Columns of the table often have a type associated with them, defining them as character data, date or time information, integers, or floating point numbers. This tabular format is a precursor to the relational model.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
A basic statement in RDF would be expressed in the relational data model by a table
The fundamental assumption behind a relational model is that all data is represented as mathematical n-ary relations, an n-ary relation being a subset of the Cartesian product of n domains. In the mathematical model, reasoning about such data is done in two-valued predicate logic, meaning there are two possible evaluations for each proposition: either true or false (and in particular no third value such as unknown, or not applicable, either of which are often associated with the concept of NULL). Data are operated upon by means of a relational calculus or relational algebra, these being equivalent in expressive power. The relational model of data permits the database designer to create a consistent, logical representation of information.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following statements is wrong regarding RDF?
There is very little variation in how an RDF graph can be represented in N-Triples. This makes it a very convenient format to provide "model answers" for RDF test suites.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
Which of the following statements is wrong regarding RDF?
An RDF graph statement is represented by: 1) a node for the subject, 2) an arc that goes from a subject to an object for the predicate, and 3) a node for the object. Each of the three parts of the statement can be identified by a URI. An object can also be a literal value.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The number of non-zero entries in a column of a term-document matrix indicates:
When creating a data-set of terms that appear in a corpus of documents, the document-term matrix contains rows corresponding to the documents and columns corresponding to the terms. Each ij cell, then, is the number of times word j occurs in document i. As such, each row is a vector of term counts that represents the content of the document corresponding to that row. For instance, if one has the following two (short) documents: D1 = "I like databases" and D2 = "I dislike databases", then the document-term matrix would be:

        I   like   dislike   databases
D1      1   1      0         1
D2      1   0      1         1

which shows which documents contain which terms and how many times they appear.
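Building such a matrix is a short exercise; the sketch below reproduces the D1/D2 example with raw term counts (whitespace tokenization is a simplifying assumption):

```python
from collections import Counter

def document_term_matrix(docs, vocab):
    """Rows = documents, columns = vocabulary terms, cells = raw term counts."""
    return [[Counter(doc.split())[term] for term in vocab] for doc in docs]

vocab = ["I", "like", "dislike", "databases"]
docs = ["I like databases", "I dislike databases"]
for row in document_term_matrix(docs, vocab):
    print(row)
# [1, 1, 0, 1]
# [1, 0, 1, 1]
```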
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
The number of non-zero entries in a column of a term-document matrix indicates:
In text databases, for a document collection defined by a document-by-term matrix D (of size m×n, where m is the number of documents and n is the number of terms), the number of clusters can roughly be estimated by the formula $\frac{mn}{t}$, where t is the number of non-zero entries in D. Note that in D each row and each column must contain at least one non-zero element.
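The estimate above amounts to dividing the matrix size by its number of non-zero entries; a one-function sketch, reusing the small D1/D2 matrix from the other passage on this question:

```python
def estimated_clusters(D):
    """Rough cluster-count estimate m*n/t for an m x n document-term matrix D,
    where t is the number of non-zero entries of D."""
    m, n = len(D), len(D[0])
    t = sum(1 for row in D for v in row if v != 0)
    return m * n / t

D = [[1, 1, 0, 1], [1, 0, 1, 1]]
print(estimated_clusters(D))  # 2*4 / 6 non-zero entries ≈ 1.33
```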
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
What is TRUE regarding Fagin's algorithm?
Fagin's theorem is the oldest result of descriptive complexity theory, a branch of computational complexity theory that characterizes complexity classes in terms of logic-based descriptions of their problems rather than by the behavior of algorithms for solving those problems. The theorem states that the set of all properties expressible in existential second-order logic is precisely the complexity class NP. It was proven by Ronald Fagin in 1973 in his doctoral thesis, and appears in his 1974 paper. The arity required by the second-order formula was improved (in one direction) in Lynch (1981), and several results of Grandjean have provided tighter bounds on nondeterministic random-access machines.
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus
What is TRUE regarding Fagin's algorithm?
Fagin's theorem is a result in descriptive complexity theory that states that the set of all properties expressible in existential second-order logic is precisely the complexity class NP. It is remarkable since it is a characterization of the class NP that does not invoke a model of computation such as a Turing machine. The theorem was proven by Ronald Fagin in 1974 (strictly, in 1973 in his doctoral thesis). As a corollary, Jones and Selman showed that a set is a spectrum if and only if it is in the complexity class NEXP. One direction of the proof is to show that, for every first-order formula $\varphi$, the problem of determining whether there is a model of the formula of cardinality n is equivalent to the problem of satisfying a formula of size polynomial in n, which is in NP(n) and thus in NEXP of the input to the problem (the number n in binary form, which is a string of size log(n)).
https://www.kaggle.com/datasets/conjuring92/wiki-stem-corpus