C9100
Decision trees provide an effective method of decision making because they: clearly lay out the problem so that all options can be challenged; allow us to fully analyze the possible consequences of a decision; and provide a framework to quantify the values of outcomes and the probabilities of achieving them.
C9101
A representative sample is a subset of a population that seeks to accurately reflect the characteristics of the larger group. For example, a classroom of 30 students with 15 males and 15 females could generate a representative sample that might include six students: three males and three females.
C9102
A t score is one form of a standardized test statistic (the other you'll come across in elementary statistics is the z-score). The t score formula enables you to take an individual score and transform it into a standardized form, one which helps you to compare scores.
C9103
Recall is the number of relevant documents retrieved by a search divided by the total number of existing relevant documents, while precision is the number of relevant documents retrieved by a search divided by the total number of documents retrieved by that search.
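The two ratios above can be checked with a tiny, hypothetical document-set example (the document IDs are made up purely for illustration):

```python
# Hypothetical sets of document IDs, purely for illustration.
relevant = {"d1", "d2", "d3", "d4"}      # all existing relevant documents
retrieved = {"d1", "d2", "d5"}           # documents the search returned

true_positives = len(relevant & retrieved)
recall = true_positives / len(relevant)       # 2 / 4 = 0.5
precision = true_positives / len(retrieved)   # 2 / 3
```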
C9104
Let X be a discrete random variable with a geometric distribution with parameter p for some 0 < p ≤ 1. Then the moment generating function M_X of X is given by: M_X(t) = p / (1 − (1 − p)e^t).
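A sketch of the derivation, assuming the convention that X counts failures before the first success (support 0, 1, 2, …):

```latex
M_X(t) = \mathbb{E}\!\left[e^{tX}\right]
       = \sum_{k=0}^{\infty} e^{tk}\, p(1-p)^k
       = p \sum_{k=0}^{\infty} \left[(1-p)e^t\right]^k
       = \frac{p}{1-(1-p)e^t},
\qquad \text{valid for } (1-p)e^t < 1.
```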
C9105
Authors sometimes calculate the difference between the highest and the lowest range value and report it as one estimate of the spread, most commonly for the interquartile range (4). For example, instead of reporting values of 34 (30–39) for the median and interquartile range, one can report 34 (9).
C9106
(Example: a test with 90% specificity will correctly return a negative result for 90% of people who don't have the disease, but will return a positive result — a false-positive — for 10% of the people who don't have the disease and should have tested negative.)
C9107
Collective intelligence (CI) is shared or group intelligence that emerges from the collaboration, collective efforts, and competition of many individuals and appears in consensus decision making.
C9108
Machine Learning on Code (MLonCode) is a new interdisciplinary field of research related to Natural Language Processing, programming language structure, and social and historical analysis, such as contribution graphs and commit time series.
C9109
Stacking, also known as stacked generalization, is an ensemble method where the models are combined using another machine learning algorithm. The basic idea is to train several machine learning algorithms on the training dataset and then generate a new dataset from these models' predictions.
C9110
A good property of conditional entropy is that if we know H(Y|X) = 0, then Y = f(X) for some function f. For another interpretation of conditional entropy, suppose that Y is an estimate of X and we are interested in the probability of error Pe. If, for a given Y = y, we can determine X without error, then H(X|Y=y) = 0.
C9111
The rectified linear activation function (ReLU) has become the default activation function for many types of neural networks because a model that uses it is easier to train and often achieves better performance. ReLU mitigates the vanishing gradient problem, allowing models to learn faster and perform better.
C9112
Neural Networks are essentially a part of Deep Learning, which in turn is a subset of Machine Learning. So, Neural Networks are nothing but a highly advanced application of Machine Learning that is now finding applications in many fields of interest.
C9113
The Kruskal-Wallis H test uses ranks instead of actual data. It is sometimes called the one-way ANOVA on ranks, as the ranks of the data values are used in the test rather than the actual data points. The test determines whether the medians of two or more groups are different.
C9114
The regression slope intercept formula, b0 = ȳ − b1 * x̄, is really just an algebraic rearrangement of the regression equation, y' = b0 + b1x, where b0 is the y-intercept and b1 is the slope. Once you've found the linear regression equation, all that's required is a little algebra to find the y-intercept (or the slope).
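As a rough sketch with made-up data points, the slope can be computed from the least-squares formula and the intercept then recovered as mean(y) minus slope times mean(x):

```python
# Made-up data points, for illustration only.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.1, 6.2, 8.0]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Least-squares slope: b1 = sum((x - x̄)(y - ȳ)) / sum((x - x̄)^2)
b1 = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
     / sum((x - mean_x) ** 2 for x in xs)

# Intercept from the formula in the text: b0 = ȳ - b1 * x̄
b0 = mean_y - b1 * mean_x
```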
C9115
Tokens are the smallest elements of a program, which are meaningful to the compiler. The following are the types of tokens: Keywords, Identifiers, Constant, Strings, Operators, etc.
C9116
The first postulate of statistical mechanics is often called the principle of equal a priori probabilities. It says that if the microstates have the same energy, volume, and number of particles, then they occur with equal frequency in the ensemble.
C9117
7 Top Linear Algebra Resources for Machine Learning Beginners:
- Essence of Linear Algebra by 3Blue1Brown
- Linear Algebra by Khan Academy
- Basic Linear Algebra for Deep Learning by Niklas Donges
- Computational Linear Algebra for Coders by fast.ai
- Deep Learning Book by Ian Goodfellow, Yoshua Bengio, and Aaron Courville
- Linear Algebra for Machine Learning by AppliedAICourse
C9118
Restricted Boltzmann Machines are shallow, two-layer neural nets that constitute the building blocks of deep-belief networks. The first layer of the RBM is called the visible, or input layer, and the second is the hidden layer. Each circle represents a neuron-like unit called a node.
C9119
Hierarchical Task Analysis:
1. Define the task being analyzed, as well as the purpose of the task analysis.
2. Conduct data collection.
3. Determine the overall goal of the task.
4. Determine task sub-goals.
5. Perform sub-goal decomposition.
6. Develop a plans analysis.
C9120
Deep Learning is a part of Machine Learning that is applied to larger data sets and is based on ANNs (Artificial Neural Networks). It is the main technology used in NLP (Natural Language Processing), which focuses on teaching computers to understand natural/human language. NLP is a part of AI that overlaps with ML and DL.
C9121
The sign test is a statistical method to test for consistent differences between pairs of observations, such as the weight of subjects before and after treatment. The sign test can also test if the median of a collection of numbers is significantly greater than or less than a specified value.
C9122
The standard normal distribution is a normal distribution with a mean of zero and standard deviation of 1. The standard normal distribution is centered at zero and the degree to which a given measurement deviates from the mean is given by the standard deviation.
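A sketch of expressing a measurement's deviation from the mean in units of standard deviations (the z-score); the test score, mean, and standard deviation below are hypothetical:

```python
def z_score(x, mean, sd):
    # Distance of x from the mean, in units of standard deviations.
    return (x - mean) / sd

# Hypothetical values: a score of 130 on a test with mean 100 and sd 15.
z = z_score(130, 100, 15)
print(z)  # 2.0 -> the score lies two standard deviations above the mean
```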
C9123
Supervised learning is the machine learning task of learning a function that maps an input to an output based on example input-output pairs. It infers a function from labeled training data consisting of a set of training examples.
C9124
Factor analysis is a technique that is used to reduce a large number of variables into fewer numbers of factors. This technique extracts maximum common variance from all variables and puts them into a common score. As an index of all variables, we can use this score for further analysis.
C9125
Another way to describe the imbalance of classes in a dataset is to summarize the class distribution as percentages of the training dataset. For example, an imbalanced multiclass classification problem may have 80 percent examples in the first class, 18 percent in the second class, and 2 percent in a third class.
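The 80/18/2 split above can be sketched directly; the class labels here are synthetic, chosen to reproduce that distribution:

```python
from collections import Counter

# Synthetic labels reproducing an 80/18/2 percent class imbalance.
labels = ["a"] * 80 + ["b"] * 18 + ["c"] * 2

counts = Counter(labels)
percentages = {cls: 100.0 * n / len(labels) for cls, n in counts.items()}
print(percentages)  # {'a': 80.0, 'b': 18.0, 'c': 2.0}
```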
C9126
Definition of the loss: the goal of the triplet loss is to make sure that two examples with the same label have their embeddings close together in the embedding space, and two examples with different labels have their embeddings far apart.
C9127
At equilibrium, the change in entropy is zero, i.e., ΔS=0 (at equilibrium).
C9128
Max Pooling is a pooling operation in which a kernel extracts the maximum value of the region it slides over. Max Pooling effectively tells the Convolutional Neural Network to carry forward only the strongest activation, i.e., the largest value, within each window.
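A minimal sketch of 2x2 max pooling with stride 2 on a small, made-up feature map, keeping only the largest value in each window:

```python
def max_pool_2x2(fmap):
    # 2x2 max pooling with stride 2 over a list-of-lists feature map
    # (assumes even dimensions, for simplicity of the sketch).
    rows, cols = len(fmap), len(fmap[0])
    return [
        [max(fmap[r][c], fmap[r][c + 1], fmap[r + 1][c], fmap[r + 1][c + 1])
         for c in range(0, cols, 2)]
        for r in range(0, rows, 2)
    ]

feature_map = [
    [1, 3, 2, 0],
    [4, 2, 1, 5],
    [0, 1, 3, 2],
    [2, 6, 1, 1],
]
pooled = max_pool_2x2(feature_map)
print(pooled)  # [[4, 5], [6, 3]]
```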
C9129
If two events have no elements in common (Their intersection is the empty set.), the events are called mutually exclusive. Thus, P(A∩B)=0 . This means that the probability of event A and event B happening is zero. They cannot both happen.
C9130
The normal approximation gives a very poor result without the continuity correction. A continuity correction is applied whenever a discrete distribution, such as the binomial, is approximated by the continuous normal distribution.
C9131
Python is easy to learn and work with, and provides convenient ways to express how high-level abstractions can be coupled together. Nodes and tensors in TensorFlow are Python objects, and TensorFlow applications are themselves Python applications. The actual math operations, however, are not performed in Python.
C9132
and the definition of an unbiased estimator corresponds to the fact that the above integral should be equal to the parameter θ of the underlying distribution. The sample mean and variance are consistent and unbiased estimators of the mean and variance of the underlying distribution.
C9133
A goodness-of-fit test, in general, refers to measuring how well the observed data correspond to the fitted (assumed) model. In essence, as in linear regression, the goodness-of-fit test compares the observed values to the expected (fitted or predicted) values.
C9134
For binary classification, softmax and sigmoid should give the same results, because softmax is a generalization of the sigmoid to a larger number of classes.
C9135
A Likert Scale is a type of rating scale used to measure attitudes or opinions. With this scale, respondents are asked to rate items on a level of agreement, for example: strongly agree, agree, neutral, disagree, strongly disagree.
C9136
Abnormal BRCA1 and BRCA2 genes are found in 5% to 10% of all breast cancer cases in the United States. A study found that women with an abnormal BRCA1 gene had a worse prognosis than women with an abnormal BRCA2 gene 5 years after diagnosis.
C9137
Stochastic Gradient Descent (SGD): in Stochastic Gradient Descent, a few samples are selected randomly instead of the whole data set for each iteration, which avoids the cost of computing gradients over all the data. In its strictest form, SGD uses only a single sample, i.e., a batch size of one, to perform each iteration.
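A toy sketch of single-sample SGD (batch size one) fitting a one-parameter linear model; the data, learning rate, and step count are arbitrary choices for illustration:

```python
import random

# Toy data generated from y = 3x, so the true weight is 3.0.
random.seed(0)
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]

w = 0.0
lr = 0.1
for _ in range(200):
    x, y = random.choice(data)   # one randomly chosen sample per step
    grad = 2 * (w * x - y) * x   # d/dw of the squared error (w*x - y)^2
    w -= lr * grad               # gradient step on that single sample

print(round(w, 3))  # 3.0
```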
C9138
Definition: a score that is derived from an individual's raw score within a distribution of scores. The standard score describes the difference of the raw score from a sample mean, expressed in standard deviations. Standard scores preserve the relative differences between scores.
C9139
Any study that attempts to predict human behavior will tend to have R-squared values less than 50%. However, if you analyze a physical process and have very good measurements, you might expect R-squared values over 90%. There is no one-size fits all best answer for how high R-squared should be.
C9140
Negative coefficients indicate that the event is less likely at that level of the predictor than at the reference level. The coefficient is the estimated change in the natural log of the odds when you change from the reference level to the level of the coefficient.
C9141
Example 1: Fair Dice Roll The number of desired outcomes is 3 (rolling a 2, 4, or 6), and there are 6 outcomes in total. The a priori probability for this example is calculated as follows: A priori probability = 3 / 6 = 50%. Therefore, the a priori probability of rolling a 2, 4, or 6 is 50%.
C9142
List of common machine learning algorithms:
- Linear Regression
- Logistic Regression
- Decision Tree
- SVM
- Naive Bayes
- kNN
- K-Means
- Random Forest
C9143
This issue calls for Large-scale Machine Learning (LML), which aims to learn patterns from big data efficiently, with comparable performance.
C9144
Statistical analysts test a hypothesis by measuring and examining a random sample of the population being analyzed. All analysts use a random population sample to test two different hypotheses: the null hypothesis and the alternative hypothesis.
C9145
geometrical product specifications
C9146
Cross-entropy loss, or log loss, measures the performance of a classification model whose output is a probability value between 0 and 1. Cross-entropy loss increases as the predicted probability diverges from the actual label.
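A minimal sketch of binary cross-entropy for a single prediction, showing the loss growing as the predicted probability diverges from the label:

```python
import math

def log_loss(y_true, p_pred):
    # y_true is 0 or 1; p_pred is the predicted probability of class 1.
    return -(y_true * math.log(p_pred) + (1 - y_true) * math.log(1 - p_pred))

confident_right = log_loss(1, 0.9)   # small loss: prediction near the label
confident_wrong = log_loss(1, 0.1)   # large loss: prediction far from the label
print(confident_right < confident_wrong)  # True
```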
C9147
The t-distribution describes the standardized distances of sample means to the population mean when the population standard deviation is not known, and the observations come from a normally distributed population.
C9148
D refers to the number of differencing transformations required to make the time series stationary. Differencing is a method of transforming a non-stationary time series into a stationary one. This is an important step in preparing data to be used in an ARIMA model.
C9149
Simple linear regression assumptions:
- Linearity: the relationship between X and the mean of Y is linear.
- Homoscedasticity: the variance of the residuals is the same for any value of X.
- Independence: observations are independent of each other.
- Normality: for any fixed value of X, Y is normally distributed.
C9150
The "Linear-by-Linear" test is for ordinal (ordered) categories and assumes equal and ordered intervals. The Linear-by-Linear Association test is a test for trends in a larger-than-2x2 table. Its value is shown to be significant and indicates that income tends to rise with values of "male" (i.e., from 0 to 1).
C9151
Not all machine learning algorithms make the iid assumption (for example, decision tree based approaches do not). So, common learning algorithms can be used to learn time series data.
C9152
In statistics, the likelihood function (often simply called the likelihood) measures the goodness of fit of a statistical model to a sample of data for given values of the unknown parameters.
C9153
A Bayesian Neural Network (BNN) can then be defined as any stochastic artificial neural network trained using Bayesian inference [54]. To design a BNN, the first step is the choice of a deep neural network architecture, i.e., of a functional model.
C9154
Student's t-distribution and Snedecor-Fisher's F-distribution are two distributions used in statistical tests. The first one is commonly used to estimate the mean µ of a normal distribution when the variance σ² is not known, a common situation.
C9155
A sampling distribution is the theoretical distribution of a sample statistic that would be obtained from a large number of random samples of equal size from a population. Consequently, the sampling distribution serves as a statistical “bridge” between a known sample and the unknown population.
C9156
The idea behind importance sampling is that certain values of the input random variables in a simulation have more impact on the parameter being estimated than others. If these "important" values are emphasized by sampling more frequently, then the estimator variance can be reduced.
C9157
If the sample statistic being tested falls into either of the critical areas, the null hypothesis is rejected in favor of the alternative hypothesis. The two-tailed test gets its name from testing the area under both tails of a normal distribution, although the test can also be used with other, non-normal distributions.
C9158
The gradients are the partial derivatives of the loss with respect to each of the six variables. TensorFlow presents the gradient and the variable of which it is the gradient, as members of a tuple inside a list. We display the shapes of each of the gradients and variables to check that is actually the case.
C9159
Classification model: A classification model tries to draw some conclusion from the input values given for training. It will predict the class labels/categories for the new data. Feature: A feature is an individual measurable property of a phenomenon being observed.
C9160
TensorFlow is more of a low-level library. Scikit-Learn is a higher-level library that includes implementations of several machine learning algorithms, so you can define a model object in a single line or a few lines of code, then use it to fit a set of points or predict a value.
C9161
The standard deviation of the sample mean X̄ that we have just computed is the standard deviation of the population divided by the square root of the sample size: σ_X̄ = σ/√n = √20/√2 = √10.
C9162
If the sequence of estimates can be mathematically shown to converge in probability to the true value θ0, it is called a consistent estimator; otherwise the estimator is said to be inconsistent.
C9163
In null hypothesis testing, this criterion is called α (alpha) and is almost always set to .05. If there is less than a 5% chance of a result as extreme as the sample result if the null hypothesis were true, then the null hypothesis is rejected. When this happens, the result is said to be statistically significant.
C9164
Skip connections are extra connections between nodes in different layers of a neural network that skip one or more layers of nonlinear processing.
C9165
“A priori” and “a posteriori” refer primarily to how, or on what basis, a proposition might be known. In general terms, a proposition is knowable a priori if it is knowable independently of experience, while a proposition knowable a posteriori is knowable on the basis of experience.
C9166
Algorithm. As of 2016, AlphaGo's algorithm uses a combination of machine learning and tree search techniques, combined with extensive training, both from human and computer play. It uses Monte Carlo tree search, guided by a "value network" and a "policy network," both implemented using deep neural network technology.
C9167
Definition: The Population Distribution is a form of probability distribution that measures the frequency with which the items or variables that make up the population are drawn or expected to be drawn for a given research study.
C9168
Time series analysis can be useful to see how a given asset, security, or economic variable changes over time. It can also be used to examine how the changes associated with the chosen data point compare to shifts in other variables over the same time period.
C9169
The law of large numbers, in probability and statistics, states that as a sample size grows, its mean gets closer to the average of the whole population. In the 16th century, the mathematician Gerolamo Cardano recognized the law of large numbers but never proved it.
C9170
Discriminant analysis is a statistical technique used to classify observations into non-overlapping groups, based on scores on one or more quantitative predictor variables. For example, a doctor could perform a discriminant analysis to identify patients at high or low risk for stroke.
C9171
An easy way to define the difference between frequency and relative frequency is that frequency relies on the actual values of each class in a statistical data set while relative frequency compares these individual values to the overall totals of all classes concerned in a data set.
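The distinction can be sketched with a small, made-up data set: frequency is the raw count per class, while relative frequency divides each count by the overall total:

```python
from collections import Counter

# Made-up class labels, for illustration only.
data = ["red", "blue", "red", "green", "red", "blue"]

frequency = Counter(data)                          # actual counts per class
total = len(data)
relative = {k: v / total for k, v in frequency.items()}

print(frequency["red"], relative["red"])  # 3 0.5
```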
C9172
Perhaps the most famous case ever of misleading statistics in the news is the case of Sally Clark, who was convicted of murdering her children. She was freed after it was found the statistics used in her murder trial were completely wrong.
C9173
Spark is capable of handling large-scale batch and streaming data, figuring out when to cache data in memory and processing it up to 100 times faster than Hadoop-based MapReduce. First, you will learn how to install Spark with all the new features from the latest Spark 2.0 release.
C9174
66.5%
C9175
Quartile deviation is half the difference between the first and third quartiles in any distribution. Standard deviation measures the dispersion of the data set relative to its mean.
C9176
Reinforcement learning (RL) is an area of machine learning concerned with how software agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning.
C9177
A precision-recall point is a point with a pair of x and y values in the precision-recall space where x is recall and y is precision. A precision-recall curve is created by connecting all precision-recall points of a classifier. Two adjacent precision-recall points can be connected by a straight line.
C9179
Summary: population variance refers to the value of variance that is calculated from population data, and sample variance is the variance calculated from sample data. Because the sample formulas divide by n − 1 rather than n, the variance and standard deviation derived from sample data are larger than those computed from the same data with the population formulas.
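A sketch with made-up values, showing the population formula (divide by n) against the sample formula (divide by n − 1):

```python
# Made-up values, for illustration only.
values = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

n = len(values)
mean = sum(values) / n
ss = sum((v - mean) ** 2 for v in values)   # sum of squared deviations

population_variance = ss / n        # divide by n
sample_variance = ss / (n - 1)      # divide by n - 1: always larger than ss / n

print(population_variance, round(sample_variance, 3))  # 4.0 4.571
```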
C9180
The potential solutions include the following: Remove some of the highly correlated independent variables. Linearly combine the independent variables, such as adding them together. Perform an analysis designed for highly correlated variables, such as principal components analysis or partial least squares regression.
C9181
If slope = 0, as you increase one variable, the other variable doesn't change at all. This means no relationship.
C9182
For example, if we want to measure current obesity levels in a population, we could draw a sample of 1,000 people randomly from that population (also known as a cross section of that population), measure their weight and height, and calculate what percentage of that sample is categorized as obese.
C9183
Linear algebra is used in almost all compute-intensive tasks. It can be used to efficiently solve sets of linear equations, and many non-linear problems are attacked by approximating them with linear ones.
C9184
Each of the steps should take about 4–6 weeks' time. And in about 26 weeks since the time you started, and if you followed all of the above religiously, you will have a solid foundation in deep learning.
C9185
The Lorenz Curve is a graph that illustrates the distribution of income in the economy. It suggests that the distribution of income in the United States is unequal.
C9186
The difference is pretty simple: squared error penalizes large deviations more. The mean absolute error is a common measure of forecast error in time series analysis [2], where the term "mean absolute deviation" is sometimes used for it, in conflict with the more standard definition of mean absolute deviation.
C9187
The cumulative distribution function of the standard normal distribution is, up to constant factors, the error function: erf(x) ≡ (2/√π) ∫₀^x exp(−y²) dy.
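The error-function integral can be sanity-checked numerically against Python's built-in math.erf using a simple midpoint rule (the step count here is an arbitrary choice):

```python
import math

def erf_numeric(x, steps=100_000):
    # Midpoint-rule approximation of (2/sqrt(pi)) * integral_0^x exp(-y^2) dy.
    h = x / steps
    total = sum(math.exp(-((i + 0.5) * h) ** 2) for i in range(steps))
    return (2.0 / math.sqrt(math.pi)) * total * h

print(abs(erf_numeric(1.0) - math.erf(1.0)) < 1e-9)  # True
```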
C9188
Introduction. Categorical Data is the data that generally takes a limited number of possible values. Also, the data in the category need not be numerical, it can be textual in nature. All machine learning models are some kind of mathematical model that need numbers to work with.
C9189
Serial 7s (i.e., serial subtraction of 7 from 100 down to 65) has been proposed as a measure of attention and concentration. Spelling the word WORLD backwards is commonly used as a substitute for patients who cannot perform the serial 7s. Digit span is also used to measure attention and concentration.
C9190
In Convolutional Neural Networks, Filters detect spatial patterns such as edges in an image by detecting the changes in intensity values of the image. High pass filters are used to enhance the high-frequency parts of an image.
C9191
Performance Testing is a software testing process used for testing the speed, response time, stability, reliability, scalability and resource usage of a software application under particular workload. It is a subset of performance engineering and also known as “Perf Testing”.
C9192
Correlation in the error terms suggests that there is additional information in the data that has not been exploited in the current model. When the observations have a natural sequential order, the correlation is referred to as autocorrelation. Autocorrelation may occur for several reasons.
C9193
To reduce variability we perform multiple rounds of cross-validation with different subsets from the same data. We combine the validation results from these multiple rounds to come up with an estimate of the model's predictive performance. Cross-validation will give us a more accurate estimate of a model's performance.
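The round-by-round splitting can be sketched as a k-fold index generator: each round holds out a different fold for validation, and the per-round scores would then be averaged into one performance estimate:

```python
def k_fold_indices(n_samples, k):
    # Split sample indices into k folds; return (train, validation) index
    # pairs, one per round. Earlier folds absorb any remainder.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return [(sorted(set(range(n_samples)) - set(f)), f) for f in folds]

splits = k_fold_indices(6, 3)
print(splits[0])  # ([2, 3, 4, 5], [0, 1])
```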
C9194
Linear regression is the next step up after correlation. It is used when we want to predict the value of a variable based on the value of another variable. The variable we want to predict is called the dependent variable (or sometimes, the outcome variable).
C9195
Boosting is a general ensemble method that creates a strong classifier from a number of weak classifiers. This is done by building a model from the training data, then creating a second model that attempts to correct the errors from the first model.
C9196
In statistics, self-selection bias arises in any situation in which individuals select themselves into a group, causing a biased sample with nonprobability sampling. In opinion polling, a poll suffering from such bias is termed a self-selected listener opinion poll or "SLOP".
C9197
The general linear model requires that the response variable follows the normal distribution whilst the generalized linear model is an extension of the general linear model that allows the specification of models whose response variable follows different distributions.
C9198
A dense CNN (DenseNet) is a type of deep CNN in which each layer is connected to every layer deeper than itself.
C9199
Covariances have significant applications in finance and modern portfolio theory. For example, in the capital asset pricing model (CAPM), which is used to calculate the expected return of an asset, the covariance between a security and the market is used in the formula for one of the model's key variables, beta.