C3200
Clustering is the task of dividing the population or data points into a number of groups such that data points in the same group are more similar to each other than to data points in other groups. In simple words, the aim is to segregate groups with similar traits and assign them into clusters.
C3201
The Cramér-Rao Inequality provides a lower bound for the variance of an unbiased estimator of a parameter. It allows us to conclude that an unbiased estimator is a minimum variance unbiased estimator for a parameter.
C3202
The F-statistic is the test statistic for F-tests. In general, an F-statistic is a ratio of two quantities that are expected to be roughly equal under the null hypothesis, which produces an F-statistic of approximately 1. In order to reject the null hypothesis that the group means are equal, we need a high F-value.
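To make the ratio concrete, the one-way ANOVA F-statistic can be computed by hand; this is a minimal sketch, and the three groups below are made-up numbers, not from any referenced study.

```python
# One-way ANOVA F-statistic: between-group mean square over within-group
# mean square. The data are illustrative, made-up values.
def f_statistic(groups):
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (k - 1 degrees of freedom)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (n - k degrees of freedom)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

groups = [[4.0, 5.0, 6.0], [4.5, 5.5, 6.5], [5.0, 6.0, 7.0]]
print(round(f_statistic(groups), 3))  # 0.75 -- group means are similar, so F is near 1
```

With similar group means the statistic comes out close to 1, matching the null-hypothesis expectation described above.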
C3203
AI is the broader concept of creating intelligent machines that can simulate human thinking and behavior, whereas machine learning is an application or subset of AI that allows machines to learn from data without being explicitly programmed.
C3204
Cluster analysis can be a powerful data-mining tool for any organisation that needs to identify discrete groups of customers, sales transactions, or other types of behaviors and things. For example, insurance providers use cluster analysis to detect fraudulent claims, and banks use it for credit scoring.
C3205
In image processing, a Gaussian blur (also known as Gaussian smoothing) is the result of blurring an image by a Gaussian function (named after mathematician and scientist Carl Friedrich Gauss). It is a widely used effect in graphics software, typically to reduce image noise and reduce detail.
C3206
Amortized VI is the idea that instead of optimizing a set of free parameters, we can introduce a parameterized function that maps from observation space to the parameters of the approximate posterior distribution.
C3207
1 degree
C3208
An invertible matrix is a square matrix that has an inverse. A square matrix is invertible if and only if its determinant is not equal to zero. For example, a 2 x 2 matrix is invertible only if its determinant is not 0.
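The determinant test can be checked numerically; the 2 x 2 matrix below is a made-up example.

```python
import numpy as np

# A made-up 2x2 matrix with nonzero determinant, hence invertible.
A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
det = np.linalg.det(A)          # 4*6 - 7*2 = 10, nonzero => invertible
A_inv = np.linalg.inv(A)
# Multiplying a matrix by its inverse recovers the identity (up to float error)
print(np.allclose(A @ A_inv, np.eye(2)))  # True
```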
C3209
A hierarchical linear regression is a special form of a multiple linear regression analysis in which more variables are added to the model in separate steps called “blocks.” This is often done to statistically “control” for certain variables, to see whether adding variables significantly improves a model's ability to predict the outcome variable.
C3210
A simple definition of a sampling frame is the set of source materials from which the sample is selected. The definition also encompasses the purpose of sampling frames, which is to provide a means for choosing the particular members of the target population that are to be interviewed in the survey.
C3211
List of common machine learning algorithms: Linear Regression, Logistic Regression, Decision Tree, SVM, Naive Bayes, kNN, K-Means, and Random Forest.
C3212
The standard deviation is simply the square root of the variance. The average deviation, also called the mean absolute deviation , is another measure of variability. However, average deviation utilizes absolute values instead of squares to circumvent the issue of negative differences between data and the mean.
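Both measures can be computed side by side; the sample below is made-up data chosen so the results are exact.

```python
import statistics

# Standard deviation (square root of the variance) vs. mean absolute
# deviation on a small made-up sample.
data = [2, 4, 4, 4, 5, 5, 7, 9]
mean = statistics.fmean(data)                       # 5.0
variance = statistics.pvariance(data)               # population variance = 4.0
std_dev = variance ** 0.5                           # sqrt of variance = 2.0
mad = sum(abs(x - mean) for x in data) / len(data)  # average deviation = 1.5
print(std_dev, mad)
```

Note how the absolute values in the average deviation sidestep the sign problem that squaring solves for the variance.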
C3213
The sample variance is not always smaller than the population variance.
C3214
The Gamma distribution can be thought of as a generalization of the Chi-square distribution. If a random variable X has a Chi-square distribution with n degrees of freedom and c is a strictly positive constant, then the random variable Y = cX has a Gamma distribution (with shape n/2 and scale 2c in the shape-scale parametrization).
C3215
The formula for calculating a z-score is z = (x − μ)/σ, where x is the raw score, μ is the population mean, and σ is the population standard deviation. As the formula shows, the z-score is simply the raw score minus the population mean, divided by the population standard deviation.
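The formula translates directly into code; the raw score, mean, and standard deviation below are made-up illustrative values.

```python
def z_score(x, mu, sigma):
    """z = (x - mu) / sigma: raw score minus population mean, over population SD."""
    return (x - mu) / sigma

# Made-up example: raw score 130 against a population with mean 100, SD 15.
print(z_score(130, 100, 15))  # 2.0 -> two standard deviations above the mean
```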
C3216
A degree of freedom is defined as an independent way in which a body can move. Consider a rectangular box: in space the box can translate along three axes and rotate about three axes. Each independent motion is counted as one degree of freedom, i.e. a rigid body in space has six degrees of freedom.
C3217
The presence of serial correlation can be detected by the Durbin-Watson test and by plotting the residuals against their lags.
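The Durbin-Watson statistic itself is simple to compute from residuals; this sketch uses simulated independent residuals, for which the statistic should land near 2 (the no-serial-correlation value).

```python
import numpy as np

def durbin_watson(residuals):
    """DW = sum of squared successive differences over sum of squared residuals.
    Values near 2 suggest no first-order serial correlation."""
    r = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(r) ** 2) / np.sum(r ** 2)

rng = np.random.default_rng(0)
e = rng.standard_normal(1000)        # independent residuals -> DW near 2
print(round(durbin_watson(e), 2))
```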
C3218
In my opinion the LVM partition scheme is more useful, because after installation you can later change partition sizes and the number of partitions easily. With standard partitions you can also do resizing, but the total number of physical partitions is limited to 4. With LVM you have much greater flexibility.
C3219
A researcher computes a one-sample z test in two studies. Both studies used the same alpha level, placed the rejection region in both tails, and measured the same sample mean. No, the same values are reported.
C3220
Action selection in AI systems is a basic mechanism by which the AI machine analyzes a problem to understand what it must do next to get closer to the solution. AI agents and action selection are therefore very important in devising an intelligent solution to a problem.
C3221
Machine learning is more than neural networks and deep learning. It is a field with a legion of smart algorithms that deduce complex patterns and make predictions about the unknown. The robustness of random forests is attributed to its collection of distinct decision trees, each trying to solve part of the problem.
C3222
Particle filters or Sequential Monte Carlo (SMC) methods are a set of Monte Carlo algorithms used to solve filtering problems arising in signal processing and Bayesian statistical inference. Particle filters update their prediction in an approximate (statistical) manner.
C3223
In mathematics, the empty set is the unique set having no elements; its size or cardinality (count of elements in a set) is zero.
C3224
The Mann-Whitney U test is used to compare differences between two independent groups when the dependent variable is either ordinal or continuous, but not normally distributed. The Mann-Whitney U test is often considered the nonparametric alternative to the independent t-test although this is not always the case.
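The U statistic has a direct counting definition that shows why no distributional assumption is needed; the two small samples below are made-up numbers.

```python
# Mann-Whitney U by direct pair counting: U for sample a is the number of
# (a_i, b_j) pairs with a_i > b_j, with ties counting 0.5. Made-up samples.
def mann_whitney_u(a, b):
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

a = [3, 4, 2, 6, 2, 5]
b = [9, 7, 5, 10, 6, 8]
print(mann_whitney_u(a, b))   # 2.0; note U_a + U_b = len(a) * len(b) = 36
```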
C3225
Logistic regression is easy to implement and interpret, and very efficient to train, and it makes no assumptions about the distributions of classes in feature space. However, if the number of observations is less than the number of features, logistic regression should not be used, as it may lead to overfitting.
C3226
Bounding boxes are one of the most popular and recognizable image annotation methods used in machine learning and deep learning. Using bounding boxes, annotators are asked to outline the object in a box as per the machine learning project requirements.
C3227
Backward elimination involves starting with all candidate variables, testing the deletion of each variable using a chosen model fit criterion, deleting the variable (if any) whose loss gives the most statistically insignificant deterioration of the model fit, and repeating this process until no further variables can be deleted without a statistically significant loss of fit.
C3228
Interpolation search works better than Binary Search for a sorted and uniformly distributed array. On average the interpolation search makes about log(log(n)) comparisons (if the elements are uniformly distributed), where n is the number of elements to be searched.
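A minimal implementation makes the idea concrete: instead of always probing the middle, the probe position is interpolated from the target's value. The uniformly spaced array below is a made-up example.

```python
def interpolation_search(arr, target):
    """Probe position estimated from the target's value; assumes a sorted,
    roughly uniformly distributed array (~O(log log n) comparisons on average)."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi and arr[lo] <= target <= arr[hi]:
        if arr[hi] == arr[lo]:                      # avoid division by zero
            break
        # Linear interpolation of the likely index
        pos = lo + (target - arr[lo]) * (hi - lo) // (arr[hi] - arr[lo])
        if arr[pos] == target:
            return pos
        if arr[pos] < target:
            lo = pos + 1
        else:
            hi = pos - 1
    return lo if lo <= hi and arr[lo] == target else -1

data = list(range(10, 110, 10))         # uniformly spaced: 10, 20, ..., 100
print(interpolation_search(data, 70))   # 6 -- found on the first probe
```

On this perfectly uniform data the first interpolated probe lands exactly on the target, which is where the log(log(n)) average comes from.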
C3229
Simple regression uses one dependent and one independent variable, whereas multiple regression uses one dependent variable and more than one independent variable.
C3230
Reinforcement learning is the training of machine learning models to make a sequence of decisions. The agent learns to achieve a goal in an uncertain, potentially complex environment. In reinforcement learning, an artificial intelligence faces a game-like situation. Its goal is to maximize the total reward.
C3231
R squared, the proportion of variation in the outcome Y, explained by the covariates X, is commonly described as a measure of goodness of fit. This of course seems very reasonable, since R squared measures how close the observed Y values are to the predicted (fitted) values from the model.
C3232
According to the documentation of the removeSparseTerms function from the tm package, this is what sparsity entails: a term-document matrix where those terms from x are removed which have at least a sparse percentage of empty (i.e., terms occurring 0 times in a document) elements.
C3233
Let's explore five common techniques used for extracting information from text: Named Entity Recognition (the most basic and useful technique in NLP, extracting the entities in the text), Sentiment Analysis, Text Summarization, Aspect Mining, and Topic Modeling.
C3234
Cross-sectional data, or a cross section of a study population, in statistics and econometrics is a type of data collected by observing many subjects (such as individuals, firms, countries, or regions) at one point or period in time. The analysis might also have no regard to differences in time.
C3235
Uncertainty means simply that there is a lack of certainty, caused by some information being hidden.
C3236
Under simple random sampling, a sample of items is chosen randomly from a population, and each item has an equal probability of being chosen. Meanwhile, systematic sampling involves selecting items from an ordered population using a skip or sampling interval.
C3237
Batch size is a term used in machine learning and refers to the number of training examples utilized in one iteration. The batch size can be one of three options: batch mode, where the batch size equals the total dataset size, making the iteration and epoch values equivalent; mini-batch mode, where the batch size is greater than one but less than the total dataset size; and stochastic mode, where the batch size is equal to one.
C3238
Random forest has nearly the same hyperparameters as a decision tree or a bagging classifier. Random forest adds additional randomness to the model, while growing the trees. Instead of searching for the most important feature while splitting a node, it searches for the best feature among a random subset of features.
C3239
The Bag-of-words model is an orderless document representation — only the counts of words matter. For instance, for the text "John likes to watch movies. Mary likes movies too", the bag-of-words representation will not reveal that the verb "likes" always follows a person's name in this text.
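The bag-of-words representation for that example sentence can be built in a few lines; the tokenization here is a simple lowercase word split, a simplifying assumption.

```python
from collections import Counter
import re

# Bag-of-words for the example text: word order is discarded, only counts remain.
text = "John likes to watch movies. Mary likes movies too"
tokens = re.findall(r"[a-z]+", text.lower())   # naive lowercase tokenizer
bag = Counter(tokens)
print(bag["likes"], bag["movies"])   # 2 2 -- counts survive, order does not
```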
C3240
The definition of data misuse is pretty simple: using information in a way it wasn't intended to be used. The most common reasons for misuse are lack of awareness, personal gain, silent data collection, and using trade secrets in order to start a new business. In some cases, misuse can lead to a data breach.
C3241
Simply put, homoscedasticity means “having the same scatter.” For it to exist in a set of data, the points must be about the same distance from the regression line. The opposite is heteroscedasticity (“different scatter”), where points are at widely varying distances from the regression line.
C3242
A number of machine learning algorithms, supervised or unsupervised, use distance metrics to understand the input data pattern in order to make data-based decisions. A good distance metric helps in significantly improving the performance of classification, clustering, and information retrieval.
C3243
Advantages. The coefficient of variation is useful because the standard deviation of data must always be understood in the context of the mean of the data. In contrast, the actual value of the CV is independent of the unit in which the measurement has been taken, so it is a dimensionless number.
C3244
Hyperparameters are the variables which determine the network structure (e.g., the number of hidden units) and the variables which determine how the network is trained (e.g., the learning rate). Hyperparameters are set before training (before optimizing the weights and biases).
C3245
We often divide the distribution at 99 centiles or percentiles; the median is thus the 50th centile. For the 20th centile of FEV1, i = 0.2 × 58 = 11.6, so the quantile lies between the 11th and 12th observations, 3.42 and 3.48, and can be estimated as 3.42 + (3.48 − 3.42) × (11.6 − 11) = 3.46.
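The interpolation step can be reproduced in code; the values 3.42 and 3.48 are the 11th and 12th ordered FEV1 observations quoted in the text.

```python
import math

# The 20th-centile calculation from the text, reproduced in code.
n = 58
i = 0.2 * n                   # position 11.6: between the 11th and 12th observations
lower, upper = 3.42, 3.48     # 11th and 12th ordered FEV1 values from the text
k = math.floor(i)             # 11
q = lower + (upper - lower) * (i - k)
print(round(q, 2))            # 3.46
```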
C3246
R is a highly extensible and easy to learn language and fosters an environment for statistical computing and graphics. All of this makes R an ideal choice for data science, big data analysis, and machine learning.
C3247
Decision trees are a type of supervised machine learning (that is, you explain what the input is and what the corresponding output is in the training data) where the data is continuously split according to a certain parameter.
C3248
Cluster Sampling: Advantages and Disadvantages Assuming the sample size is constant across sampling methods, cluster sampling generally provides less precision than either simple random sampling or stratified sampling. This is the main disadvantage of cluster sampling.
C3249
The anti-Martingale, or reverse Martingale, system is a trading methodology that involves halving a bet each time there is a trade loss and doubling it each time there is a gain. This technique is the opposite of the Martingale system, whereby a trader (or gambler) doubles down on a losing bet and halves a winning bet.
C3250
The determinant is a unique number associated with a square matrix. If the determinant of a matrix is equal to zero: The matrix is less than full rank. The matrix is singular.
C3251
The important limitations of statistics are: (1) Statistics laws are true on average. Statistics are aggregates of facts, so a single observation is not a statistic. (2) Statistical methods are best applicable to quantitative data. (3) Statistics cannot be applied to heterogeneous data.
C3252
The AUC for the ROC can be calculated using the roc_auc_score() function. Like the roc_curve() function, the AUC function takes both the true outcomes (0,1) from the test set and the predicted probabilities for the 1 class. It returns the AUC score between 0.0 and 1.0 for no skill and perfect skill respectively.
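roc_auc_score comes from scikit-learn; as an illustration of what it computes, the pairwise definition of AUC can be coded directly. The labels and probabilities below are a tiny made-up example.

```python
# Pairwise definition of ROC AUC: the probability that a randomly chosen
# positive example is scored above a randomly chosen negative one (ties 0.5).
# Tiny made-up labels and scores; sklearn's roc_auc_score gives the same value.
def auc(y_true, y_score):
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true  = [0, 0, 1, 1]
y_score = [0.1, 0.4, 0.35, 0.8]
print(auc(y_true, y_score))   # 0.75
```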
C3253
Now living under the identity of Scarecrow, Hide helped Koutarou Amon flee from Akihiro Kanou after he was turned into a one-eyed ghoul.
C3254
The consistency of the sampling distribution depends on the sample size, not on the distribution of the population. As the sample size decreases, the absolute value of the skewness and kurtosis of the sampling distribution increases. This sample-size relationship is expressed in the central limit theorem.
C3255
The major difference between machine learning and statistics is their purpose. Machine learning models are designed to make the most accurate predictions possible. Statistical models are designed for inference about the relationships between variables.
C3256
You'll get the same answer, but the technical difference is that glm uses likelihood (useful if you want AIC values) whereas lm uses least squares. Consequently lm is faster, but you can't do as much with it.
C3257
If X takes values in [a, b] and Y takes values in [c, d] then the pair (X, Y ) takes values in the product [a, b] × [c, d]. The joint probability density function (joint pdf) of X and Y is a function f(x, y) giving the probability density at (x, y).
C3258
A decision tree is a flowchart-like tree structure where an internal node represents a feature (or attribute), a branch represents a decision rule, and each leaf node represents the outcome. The topmost node in a decision tree is known as the root node. It learns to partition on the basis of the attribute value.
C3259
The population mean of the distribution of sample means is the same as the population mean of the distribution being sampled from. Thus as the sample size increases, the standard deviation of the means decreases; and as the sample size decreases, the standard deviation of the sample means increases.
C3260
Maximum likelihood estimation involves defining a likelihood function for calculating the conditional probability of observing the data sample given a probability distribution and distribution parameters. This approach can be used to search a space of possible distributions and parameters.
C3261
An artificial neural network is an attempt to simulate the network of neurons that make up a human brain so that the computer will be able to learn things and make decisions in a humanlike manner. ANNs are created by programming regular computers to behave as though they are interconnected brain cells.
C3262
In a mechanical system, energy is dissipated when two surfaces rub together. Work is done against friction which causes heating of the two surfaces – so the internal (thermal) energy store of the surfaces increases and this is then transferred to the internal energy store of the surroundings.
C3263
GAN models can suffer badly in the following areas compared to other deep networks. Non-convergence: the models do not converge and, worse, they become unstable. Slow training: the gradient used to train the generator vanishes.
C3264
A standard deviation is a measure of variability for a distribution of scores in a single sample or in a population of scores. A standard error is the standard deviation in a distribution of means of all possible samples of a given size from a particular population of individual scores.
C3265
ReLU is differentiable at all points except 0: the left derivative at z = 0 is 0 and the right derivative is 1. Hidden units that are not differentiable are usually non-differentiable at only a small number of points.
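In practice implementations just pick a value in [0, 1] for the derivative at z = 0; the sketch below picks 0, a common convention.

```python
import numpy as np

# ReLU and its derivative. At z = 0 the function is not differentiable;
# this implementation assigns derivative 0 there, a common convention.
def relu(z):
    return np.maximum(0.0, z)

def relu_grad(z):
    return (z > 0).astype(float)   # 1 for z > 0, else 0 (including z = 0)

z = np.array([-2.0, 0.0, 3.0])
print(relu(z), relu_grad(z))   # [0. 0. 3.] [0. 0. 1.]
```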
C3266
The questionable cause—also known as causal fallacy, false cause, or non causa pro causa ("non-cause for cause" in Latin)—is a category of informal fallacies in which a cause is incorrectly identified. For example: "Every time I go to sleep, the sun goes down; therefore, my going to sleep causes the sun to go down."
C3267
Neural network momentum is a simple technique that often improves both training speed and accuracy. Training a neural network is the process of finding values for the weights and biases so that for a given set of input values, the computed output values closely match the known, correct, target values.
C3268
Centroid-based clustering organizes the data into non-hierarchical clusters, in contrast to hierarchical clustering. k-means is the most widely used centroid-based clustering algorithm. Centroid-based algorithms are efficient but sensitive to initial conditions and outliers.
C3269
A simple definition of a sampling frame is the set of source materials from which the sample is selected. The definition also encompasses the purpose of sampling frames, which is to provide a means for choosing the particular members of the target population that are to be interviewed in the survey.
C3270
The Random Variable is X = "The sum of the scores on the two dice". Let's count how often each value occurs, and work out the probabilities: 2 occurs just once, so P(X = 2) = 1/36. 3 occurs twice, so P(X = 3) = 2/36 = 1/18.
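The whole distribution of X can be obtained by enumerating all 36 equally likely outcomes; exact fractions make the counting argument explicit.

```python
from fractions import Fraction
from collections import Counter

# Distribution of X = sum of the scores on two fair dice,
# by enumerating all 36 equally likely outcomes.
counts = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))
p = {s: Fraction(c, 36) for s, c in counts.items()}
print(p[2], p[3], p[7])   # 1/36 1/18 1/6
```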
C3271
The integral sign ∫ represents integration. The symbol dx, called the differential of the variable x, indicates that the variable of integration is x. The function f(x) to be integrated is called the integrand.
C3272
Here's how we can do it. Step 1: choose the number of clusters k. Step 2: select k random points from the data as centroids. Step 3: assign all the points to the closest cluster centroid. Step 4: recompute the centroids of the newly formed clusters. Step 5: repeat steps 3 and 4.
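The five steps above can be sketched as a minimal k-means in NumPy; the four 2-D points are made-up data forming two obvious clusters.

```python
import numpy as np

# Minimal k-means following the five steps above. Made-up 2-D data.
def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]   # step 2
    for _ in range(iters):                                     # step 5
        # step 3: assign each point to its nearest centroid
        dists = ((X[:, None, :] - centroids[None]) ** 2).sum(-1)
        labels = np.argmin(dists, axis=1)
        # step 4: recompute each centroid as the mean of its cluster
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
labels, centroids = kmeans(X, 2)
print(labels)   # the first two points share a cluster, as do the last two
```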
C3273
Some variables, such as social security numbers and zip codes, take numerical values, but are not quantitative: They are qualitative or categorical variables. The sum of two zip codes or social security numbers is not meaningful. The average of a list of zip codes is not meaningful.
C3274
String interpolation is a one-way data-binding technique used to output data from TypeScript code to the HTML template (view). It uses a template expression in double curly braces to display data from the component in the view.
C3275
Hypothesis Tests with the Repeated-Measures t (cont.) In words, the null hypothesis says that there is no consistent or systematic difference between the two treatment conditions. Note that the null hypothesis does not say that each individual will have a difference score equal to zero.
C3276
Bootstrapping is a type of resampling where large numbers of samples of the same size are repeatedly drawn, with replacement, from a single original sample. For example, you randomly draw three numbers: 5, 1, and 49. You then replace those numbers into the sample and draw three numbers again.
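The draw-with-replacement step looks like this in code; the three-number sample is the made-up example from the text, and the seed is arbitrary.

```python
import random

# Resampling with replacement from one original sample (sketch of the idea).
random.seed(42)                      # arbitrary seed for reproducibility
original = [5, 1, 49]
resamples = [random.choices(original, k=len(original)) for _ in range(4)]
for r in resamples:
    print(r)   # each resample may repeat some values and omit others
```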
C3277
In mathematical optimization and decision theory, a loss function or cost function is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event.
C3278
Deep neural networks struggle with the vanishing gradient problem because of the way backpropagation is done: an error value is calculated for each neuron, starting with the output layer and working its way back to the input layer. Backpropagation then uses the chain rule to calculate the gradient for each neuron.
C3279
Edward Lorenz, from the Massachusetts Institute of Technology (MIT) is the official discoverer of chaos theory. Lorenz had rediscovered the chaotic behavior of a nonlinear system, that of the weather, but the term chaos theory was only later given to the phenomenon by the mathematician James A. Yorke, in 1975.
C3280
It is simply not possible to use k-means clustering on categorical data, because you need a distance between elements, and that is not as clear with categorical data as it is with the numerical part of your data.
C3281
Predicting Google's stock price using linear regression: take a value of x (say x = 0); find the corresponding value of y by putting x = 0 in the equation; store the (x, y) value pair in a table; repeat the process once or twice or as many times as we want; plot the points on the graph to obtain the straight line.
C3282
ReLU is important because it does not saturate; the gradient is always high (equal to 1) if the neuron activates. As long as it is not a dead neuron, successive updates are fairly effective. ReLU is also very quick to evaluate.
C3283
The correlation of X and Y is the normalized covariance: Corr(X,Y) = Cov(X,Y) / σXσY . (Notice that the covariance of X with itself is Var(X), and therefore the correlation of X with itself is 1.) Correlation is a measure of the strength of the linear relationship between two variables.
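The normalized-covariance formula can be verified against NumPy's built-in corrcoef; the two short series are made-up data.

```python
import numpy as np

# Correlation as normalized covariance: Corr(X, Y) = Cov(X, Y) / (sigma_x * sigma_y).
# Made-up data, checked against np.corrcoef.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 1.0, 4.0, 3.0, 5.0])
cov = np.mean((x - x.mean()) * (y - y.mean()))   # population covariance
corr = cov / (x.std() * y.std())
print(np.isclose(corr, np.corrcoef(x, y)[0, 1]))   # True
# Cov(X, X) = Var(X), so the correlation of X with itself is 1
print(np.isclose(np.corrcoef(x, x)[0, 1], 1.0))    # True
```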
C3284
In the nonparametric bootstrap, a sample of the same size as the data is taken from the data with replacement. What does this mean? It means that if you measure 10 samples, you create a new sample of size 10 by replicating some of the samples that you've already seen and omitting others.
C3285
In probability theory and statistics, a categorical distribution (also called a generalized Bernoulli distribution, or multinoulli distribution) is a discrete probability distribution that describes the possible results of a random variable that can take on one of K possible categories, with the probability of each category specified separately.
C3286
The machine learning perspective on the Ising model: the Ising model is an undirected graphical model, or Markov random field. The random variables are the spins of the Ising model, and two nodes are connected by an edge if the corresponding spins interact.
C3287
The Delta rule in machine learning and neural network environments is a specific type of backpropagation that helps to refine connectionist ML/AI networks, making connections between inputs and outputs with layers of artificial neurons. The Delta rule is also known as the Delta learning rule.
C3288
Because it arises from consistency between parts of a test, split-half reliability is an “internal consistency” approach to estimating reliability. This result is an estimate of the reliability of the test scores, and it provides some support for the quality of the test scores.
C3289
If there are only two variables, one is continuous and another one is categorical, theoretically, it would be difficult to capture the correlation between these two variables.
C3290
As the name stepwise regression suggests, this procedure selects variables in a step-by-step manner. The procedure adds or removes independent variables one at a time using the variable's statistical significance. Stepwise either adds the most significant variable or removes the least significant variable.
C3291
Bellman's principle of optimality: an optimal policy has the property that, whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision.
C3292
When we think of data structures, there are generally four forms: linear (arrays, lists); tree (binary, heaps, space partitioning, etc.); hash (distributed hash table, hash tree, etc.); and graphs (decision, directed, acyclic, etc.).
C3293
The Gaussian Processes Classifier is a classification machine learning algorithm. Gaussian Processes are a generalization of the Gaussian probability distribution and can be used as the basis for sophisticated non-parametric machine learning algorithms for classification and regression.
C3294
Batch normalization may be used on the inputs to the layer before or after the activation function in the previous layer. It may be more appropriate after the activation function for s-shaped functions like the hyperbolic tangent and logistic function.
C3295
Sampling errors can be reduced by the following methods: (1) by increasing the size of the sample (2) by stratification. Increasing the size of the sample: The sampling error can be reduced by increasing the sample size. If the sample size n is equal to the population size N, then the sampling error is zero.
C3296
Batch size is a term used in machine learning and refers to the number of training examples utilized in one iteration; usually it is a number that divides evenly into the total dataset size. In stochastic mode, the batch size is equal to one.
C3297
K-nearest neighbors (kNN) is a simple supervised machine learning algorithm that can be used to solve both classification and regression problems. kNN stores the available inputs and classifies new inputs based on a similarity measure, i.e., a distance function.
C3298
Regular Markov chains: a transition matrix P is regular if some power of P has only positive entries, and a Markov chain is a regular Markov chain if its transition matrix is regular. For example, if successive powers of a matrix D eventually have all positive entries, then D is regular.
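The definition is easy to check numerically; the 2 x 2 transition matrix below is made-up, with one zero entry that disappears after squaring.

```python
import numpy as np

# A chain is regular if some power of its transition matrix has only positive
# entries. This made-up P has a zero entry, but P^2 is strictly positive.
P = np.array([[0.0, 1.0],
              [0.5, 0.5]])
P2 = np.linalg.matrix_power(P, 2)   # [[0.5, 0.5], [0.25, 0.75]]
print((P > 0).all(), (P2 > 0).all())   # False True -> the chain is regular
```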
C3299
The primary advantage of CRFs over hidden Markov models is their conditional nature, resulting in the relaxation of the independence assumptions required by HMMs in order to ensure tractable inference.