C9800
Disentangled representation is an unsupervised learning technique that breaks down, or disentangles, each feature into narrowly defined variables and encodes them as separate dimensions. The goal is to mimic the quick intuition process of a human, using both “high” and “low” dimension reasoning.
C9801
Linear algebra is usually taken by sophomore math majors after they finish their calculus classes, but you don't need a lot of calculus in order to do it.
C9802
Gradient is covariant! The components of a vector are contravariant because they transform in the inverse (i.e. contra) way of the vector basis. It is customary to denote these components with an upper index.
C9803
An agent is anything that can perceive its environment through sensors and act upon that environment through effectors. A human agent has sensory organs such as eyes, ears, nose, tongue, and skin that serve as sensors, and other organs such as hands, legs, and mouth that serve as effectors.
C9804
Classification is a technique where we categorize data into a given number of classes. The main goal of a classification problem is to identify the category/class to which a new data will fall under. Classifier: An algorithm that maps the input data to a specific category.
C9805
Positive and negative predictive values are influenced by the prevalence of disease in the population that is being tested. If we test in a high prevalence setting, it is more likely that persons who test positive truly have disease than if the test is performed in a population with low prevalence.
C9806
Data quality is important when applying Artificial Intelligence techniques, because the results of these solutions will be as good or bad as the quality of the data used. The algorithms that feed systems based on Artificial Intelligence can only assume that the data to be analyzed are reliable.
C9807
Ensemble learning is a machine learning paradigm where multiple models (often called “weak learners”) are trained to solve the same problem and combined to get better results. The main hypothesis is that when weak models are correctly combined we can obtain more accurate and/or robust models.
C9808
How to train your deep neural network: choose appropriate training data; choose appropriate activation functions; tune the number of hidden units and layers; initialize weights carefully; set learning rates; for hyperparameter tuning, shun grid search and embrace random search; choose suitable learning methods; and keep the dimensions of weights in powers of 2.
C9809
The Agglomerative Hierarchical Clustering is the most common type of hierarchical clustering used to group objects in clusters based on their similarity.
C9810
In medical diagnosis, test sensitivity is the ability of a test to correctly identify those with the disease (true positive rate), whereas test specificity is the ability of the test to correctly identify those without the disease (true negative rate).
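As an illustration, both rates fall straight out of confusion-matrix counts; the numbers below are hypothetical:

```python
# Sensitivity and specificity from confusion-matrix counts.
def sensitivity(tp, fn):
    """True positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

# Hypothetical test results: 90 true positives, 10 false negatives,
# 80 true negatives, 20 false positives.
print(sensitivity(90, 10))  # 0.9
print(specificity(80, 20))  # 0.8
```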
C9811
Communality is calculated as the sum of squared factor loadings. Generally, an item factor loading higher than a 0.30 or 0.33 cut-off value is recommended. So if an item loads on only one factor with a loading of 0.30, its communality will be 0.30 * 0.30 = 0.09.
C9812
Means and Variances of Random Variables: The mean of a discrete random variable, X, is its weighted average. Each value of X is weighted by its probability. To find the mean of X, multiply each value of X by its probability, then add all the products. The mean of a random variable X is called the expected value of X.
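A minimal sketch of this weighted-average computation, using a fair die as an assumed example distribution:

```python
# Expected value of a discrete random variable: multiply each value by
# its probability, then add all the products.
def expected_value(dist):
    """dist: list of (value, probability) pairs; returns sum of x * p(x)."""
    return sum(x * p for x, p in dist)

# Fair six-sided die: each face has probability 1/6.
die = [(x, 1 / 6) for x in range(1, 7)]
print(expected_value(die))  # ≈ 3.5
```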
C9813
Both disparate impact and disparate treatment refer to discriminatory practices. Disparate treatment is intentional employment discrimination. For example, testing a particular skill of only certain minority applicants is disparate treatment.
C9814
If you're given the probability (percent) greater than x and you need to find x, you translate this as: Find b where p(X > b) = p (and p is given). Rewrite this as a percentile (less-than) problem: Find b where p(X < b) = 1 – p. This means find the (1 – p)th percentile for X.
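One way to sketch this translation in code, assuming X is normal and using the stdlib `statistics.NormalDist` (Python 3.8+):

```python
# Find b with P(X > b) = p by rewriting as the (1 - p)th percentile.
from statistics import NormalDist

def upper_tail_cutoff(p, mu=0.0, sigma=1.0):
    """Return b such that P(X > b) = p, i.e. the (1 - p)th percentile."""
    return NormalDist(mu, sigma).inv_cdf(1 - p)

# Standard normal, p = 0.05: b is the 95th percentile.
b = upper_tail_cutoff(0.05)
print(round(b, 3))  # 1.645
```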
C9815
Let's understand what the matrix of features is. The matrix of features is a term used in machine learning to describe the list of columns that contain independent variables to be processed, including all lines in the dataset. These lines in the dataset are called lines of observation.
C9816
random variable
C9817
The 1-proportion z test is used to test hypotheses regarding population proportions. Before you can proceed with entering the data into your calculator, you will need to symbolize the null and alternative hypotheses. For this example, let's define p as the proportion of 1-Euro coins that land heads up.
C9818
Scientific uncertainty generally means that there is a range of possible values within which the true value of the measurement lies. Further research on a topic or theory may reduce the level of uncertainty or the range of possible values.
C9819
Sensitivity and specificity are inversely proportional, meaning that as the sensitivity increases, the specificity decreases and vice versa.
C9820
Abstract. Hidden Markov Models (HMMs) provide a simple and effective framework for modelling time-varying spectral vector sequences. As a consequence, almost all present-day large vocabulary continuous speech recognition (LVCSR) systems are based on HMMs.
C9821
They all contain elements of random selection. They all measure every member of the population of interest.
C9822
The data structure used in DFS is a stack. The process is similar to the BFS algorithm. In DFS, edges that lead to an unvisited node are called discovery edges, while edges that lead to an already visited node are called block edges.
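A minimal iterative DFS sketch with an explicit stack; the adjacency-list graph is a made-up example:

```python
# Iterative depth-first search using an explicit stack.
def dfs(graph, start):
    visited, stack = [], [start]
    while stack:
        node = stack.pop()  # LIFO: last discovered, first explored
        if node not in visited:
            visited.append(node)
            # Push neighbours; edges to unvisited nodes are "discovery" edges.
            stack.extend(reversed(graph.get(node, [])))
    return visited

g = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(dfs(g, "A"))  # ['A', 'B', 'D', 'C']
```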
C9823
A latent variable is a variable that cannot be observed. The presence of latent variables, however, can be detected by their effects on variables that are observable. Most constructs in research are latent variables. Consider the psychological construct of anxiety, for example.
C9824
Logit models are used for discrete outcome modeling. This can be for binary outcomes (0 and 1) or for three or more outcomes (multinomial logit). Tobit models, in contrast, are a form of linear regression; they have nothing to do with binary or discrete outcomes.
C9825
Another common model for classification is the support vector machine (SVM). An SVM works by projecting the data into a higher dimensional space and separating it into different classes by using a single (or set of) hyperplanes. A single SVM does binary classification and can differentiate between two classes.
C9826
Interpolation search is an algorithm for searching for a key in an array that has been ordered by numerical values assigned to the keys (key values). It was first described by W. W. Peterson in 1957.
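A sketch of the algorithm, assuming a sorted numeric array; the probe position is estimated from the key's value rather than from the midpoint as in binary search:

```python
# Interpolation search over a sorted numeric array.
def interpolation_search(arr, key):
    lo, hi = 0, len(arr) - 1
    while lo <= hi and arr[lo] <= key <= arr[hi]:
        if arr[hi] == arr[lo]:  # avoid division by zero
            pos = lo
        else:
            # Probe position proportional to where the key falls in value.
            pos = lo + (key - arr[lo]) * (hi - lo) // (arr[hi] - arr[lo])
        if arr[pos] == key:
            return pos
        if arr[pos] < key:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1

data = [10, 20, 30, 40, 50, 60]
print(interpolation_search(data, 40))  # 3
```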
C9827
Two major problems when training deep learning models are overfitting and underfitting of the model. These problems can be addressed by data augmentation, a regularization technique that makes slight modifications to the images in order to generate additional data.
C9828
A (non-mathematical) definition I like by Miller (2017)3 is: Interpretability is the degree to which a human can understand the cause of a decision. The higher the interpretability of a machine learning model, the easier it is for someone to comprehend why certain decisions or predictions have been made.
C9829
A convolutional neural network (CNN) is a neural network that has one or more convolutional layers and is used mainly for image processing, classification, segmentation, and other autocorrelated data. A convolution is essentially sliding a filter over the input.
C9830
Pattern Recognition is an engineering application of Machine Learning. Machine Learning deals with the construction and study of systems that can learn from data, rather than follow only explicitly programmed instructions whereas Pattern recognition is the recognition of patterns and regularities in data.
C9831
A statistic is a number that represents a property of the sample. For example, if we consider one math class to be a sample of the population of all math classes, then the average number of points earned by students in that one math class at the end of the term is an example of a statistic.
C9832
A one-way analysis of variance (ANOVA) is used when you have a categorical independent variable (with two or more categories) and a normally distributed interval dependent variable and you wish to test for differences in the means of the dependent variable broken down by the levels of the independent variable.
C9833
Owing to the dependence of an IIR filter's result upon its previous results, an IIR filter is necessarily recursive. However, certain recursive filters have finite impulse response, so a recursive filter does not necessarily have infinite impulse response.
C9834
Hidden layers, simply put, are layers of mathematical functions each designed to produce an output specific to an intended result. Hidden layers allow for the function of a neural network to be broken down into specific transformations of the data. Each hidden layer function is specialized to produce a defined output.
C9835
A p value is used in hypothesis testing to help you support or reject the null hypothesis. The p value is the evidence against a null hypothesis. The smaller the p-value, the stronger the evidence that you should reject the null hypothesis. On the other hand, a large p-value indicates only weak evidence against the null hypothesis.
C9836
bias(θ̂) = E_θ(θ̂) − θ. An estimator T(X) is unbiased for θ if E_θ[T(X)] = θ for all θ; otherwise it is biased.
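The definition can be checked by simulation; the sketch below (arbitrary seed and sample sizes) shows that dividing squared deviations by n gives a biased estimator of the variance, while dividing by n - 1 does not:

```python
# Simulating estimator bias: the variance estimator that divides by n
# systematically underestimates the true variance (here 1.0), while the
# n - 1 version is unbiased. Seed, n, and trial count are arbitrary.
import random

random.seed(0)
n, trials = 5, 20000
biased_sum = unbiased_sum = 0.0
for _ in range(trials):
    xs = [random.gauss(0, 1) for _ in range(n)]  # draws from N(0, 1)
    m = sum(xs) / n
    ss = sum((x - m) ** 2 for x in xs)
    biased_sum += ss / n          # divide by n
    unbiased_sum += ss / (n - 1)  # divide by n - 1

print(biased_sum / trials)    # close to 0.8 (= (n - 1) / n * 1.0)
print(unbiased_sum / trials)  # close to 1.0
```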
C9837
Definition: Quota sampling is a sampling methodology wherein data is collected from a homogeneous group. It involves a two-step process where two variables can be used to filter information from the population. It can easily be administered and helps in quick comparison.
C9838
Class boundaries are the numbers used to separate classes. The size of the gap between classes is the difference between the upper class limit of one class and the lower class limit of the next class. In this case, gap = 21.83 − 21.82 = 0.01.
C9839
Batch gradient descent is a variation of the gradient descent algorithm that calculates the error for each example in the training dataset, but only updates the model after all training examples have been evaluated. One cycle through the entire training dataset is called a training epoch.
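A minimal sketch of one-variable batch gradient descent; the toy data and learning rate are illustrative choices:

```python
# Batch gradient descent for y ≈ w * x: each epoch computes the gradient
# of the mean squared error over the FULL training set, then makes a
# single parameter update.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # generated by y = 2x, so w should approach 2

w, lr = 0.0, 0.05
for epoch in range(200):  # one epoch = one full pass over the data
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad        # one update per epoch

print(round(w, 4))  # 2.0
```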
C9840
The scale-invariant feature transform (SIFT) is a feature detection algorithm in computer vision to detect and describe local features in images. SIFT keypoints of objects are first extracted from a set of reference images and stored in a database.
C9841
Your performance on the training data/the training error does not tell you how well your model is overall, but only how well it has learned the training data. The validation error tells you how well your learned model generalises, that means how well it fits to data that it has not been trained on.
C9842
Parallel stochastic gradient descent. Parallel SGD, introduced by Zinkevich et al. [12] and shown in Algorithms 2 and 3, is one such technique and can be viewed as an improvement on model averaging. Model averaging convergence is dependent on the degree of convexity as a result of regularization.
C9843
If the set has an even number of terms, the median is the average of the middle two terms. For example, in the set {10, 12, 15, 20}, the median is the average of 12 and 15: 13.5. For a set of consecutive numbers, the mean and the median are the same.
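Both facts are easy to check with the stdlib `statistics` module:

```python
# Median of an even-length set is the average of the middle two terms.
from statistics import mean, median

data = [10, 12, 15, 20]
print(median(data))  # 13.5  (average of 12 and 15)

# For consecutive integers, the mean equals the median.
run = [4, 5, 6, 7]
print(mean(run), median(run))  # 5.5 5.5
```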
C9844
Know the formula for the linear interpolation process. The formula is y = y1 + ((x – x1) / (x2 – x1)) * (y2 – y1), where x is the known value, y is the unknown value, x1 and y1 are the coordinates that are below the known x value, and x2 and y2 are the coordinates that are above the x value.
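The formula translates directly into code; the sample points below are made up for illustration:

```python
# Linear interpolation: y = y1 + ((x - x1) / (x2 - x1)) * (y2 - y1).
def lerp(x, x1, y1, x2, y2):
    """Estimate y at x, given surrounding points (x1, y1) and (x2, y2)."""
    return y1 + ((x - x1) / (x2 - x1)) * (y2 - y1)

# Estimate y at x = 5 between the points (2, 10) and (8, 40):
print(lerp(5, 2, 10, 8, 40))  # 25.0
```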
C9845
The rejection region is the interval, measured in the sampling distribution of the statistic under study, that leads to rejection of the null hypothesis H 0 in a hypothesis test.
C9846
A vector space is any set of objects with a notion of addition and scalar multiplication that behave like vectors in R^n.
C9847
You can tell whether two random variables are independent by looking at their individual probabilities. If those probabilities don't change when the events occur together, then the variables are independent. Another way of saying this is that if the two variables are correlated, then they are not independent.
C9848
Here it is in plain language. An OR of 1.2 means there is a 20% increase in the odds of an outcome with a given exposure. An OR of 2 means there is a 100% increase in the odds of an outcome with a given exposure. Or this could be stated that there is a doubling of the odds of the outcome.
C9849
One or two of the sections is the "rejection region"; if your test value falls into that region, then you reject the null hypothesis. A one-tailed test has the rejection region in one tail, and the critical value marks the boundary of that region.
C9850
The Sampling Distribution of the Sample Mean. If repeated random samples of a given size n are taken from a population of values for a quantitative variable, where the population mean is μ (mu) and the population standard deviation is σ (sigma) then the mean of all sample means (x-bars) is population mean μ (mu).
C9851
The law of averages typically assumes that unnatural short-term “balance” must occur. This can also be known as “Gambler's Fallacy” and is not a real mathematical principle. The law of large numbers is important because it “guarantees” stable long-term results for the averages of random events.
C9852
When the response categories are ordered, you could run a multinomial regression model. The disadvantage is that you are throwing away information about the ordering. An ordinal logistic regression model preserves that information, but it is slightly more involved.
C9853
In mathematics, the geometric–harmonic mean M(x, y) of two positive real numbers x and y is defined as follows: we form the geometric mean of g0 = x and h0 = y and call it g1, i.e. g1 is the square root of xy. The geometric–harmonic mean is also designated as the harmonic–geometric mean. (cf. Wolfram MathWorld below.)
C9854
The difference is a matter of design. In the test of independence, observational units are collected at random from a population and two categorical variables are observed for each unit. In the goodness-of-fit test there is only one observed variable.
C9855
If a and b are two non-zero numbers, then the harmonic mean of a and b is a number H such that the numbers a, H, b are in H.P. We have 1/H = (1/2)(1/a + 1/b) ⇒ H = 2ab/(a + b).
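The closed form H = 2ab/(a + b) as a one-liner:

```python
# Harmonic mean of two non-zero numbers.
def harmonic_mean2(a, b):
    return 2 * a * b / (a + b)

print(harmonic_mean2(2, 6))  # 3.0  (2, 3, 6 are in harmonic progression)
```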
C9856
Suggested clip: "How to Pass Reasoning Tests - Inductive Reasoning Sample" (YouTube).
C9857
Correlation and Convolution are basic operations that we will perform to extract information from images. They are in some sense the simplest operations that we can perform on an image, but they are extremely useful. Shift-invariant means that we perform the same operation at every point in the image.
C9858
Unlike humans, artificial neural networks are fed with massive amount of data to learn. Also, real neurons do not stay on until the inputs change and the outputs may encode information using complex pulse arrangements.
C9859
The degrees of freedom in a multiple regression equals N-k-1, where k is the number of variables. The more variables you add, the more you erode your ability to test the model (e.g. your statistical power goes down).
C9860
KNN works by finding the distances between a query and all the examples in the data, selecting the specified number of examples (K) closest to the query, and then voting for the most frequent label (in the case of classification) or averaging the labels (in the case of regression).
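A bare-bones sketch for 1-D classification; the toy dataset and k are illustrative:

```python
# Minimal k-nearest-neighbours classifier over 1-D points.
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (x, label) pairs; returns majority label of the
    k examples closest to the query."""
    nearest = sorted(train, key=lambda p: abs(p[0] - query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [(1.0, "a"), (1.5, "a"), (3.0, "b"), (3.5, "b"), (1.2, "a")]
print(knn_predict(train, 1.1))  # 'a'
print(knn_predict(train, 3.2))  # 'b'
```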
C9861
Like all regression analyses, the logistic regression is a predictive analysis. Logistic regression is used to describe data and to explain the relationship between one dependent binary variable and one or more nominal, ordinal, interval or ratio-level independent variables.
C9862
Prior probability represents what is originally believed before new evidence is introduced, and posterior probability takes this new information into account. A posterior probability can subsequently become a prior for a new updated posterior probability as new information arises and is incorporated into the analysis.
C9863
Dealing with imbalanced datasets entails strategies such as improving classification algorithms or balancing classes in the training data (data preprocessing) before providing the data as input to the machine learning algorithm. The latter technique is preferred as it has wider application.
C9864
Characteristics of a Poisson Distribution The probability that an event occurs in a given time, distance, area, or volume is the same. Each event is independent of all other events. For example, the number of people who arrive in the first hour is independent of the number who arrive in any other hour.
C9865
Suggested clip: "Geometric Distribution: Mean" (YouTube).
C9866
The rate at which gases diffuse is inversely proportional to the square root of their densities.
C9867
Hinge loss: this has been used in SVMs (soft margin); the aim of this loss function is to penalize misclassification. Cross-entropy loss: probably one of the best loss functions used in classification; nowadays it is used in many advanced machine learning models such as deep neural networks.
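A minimal sketch of both losses; the inputs are illustrative, and the hinge labels are assumed to be in {-1, +1}:

```python
# Hinge loss (SVM-style) and binary cross-entropy, side by side.
import math

def hinge_loss(y, score):
    """max(0, 1 - y * score): zero only for confidently correct scores.
    y is a label in {-1, +1}; score is the raw classifier output."""
    return max(0.0, 1.0 - y * score)

def bce_loss(y, p):
    """Binary cross-entropy: y in {0, 1}, p a predicted probability in (0, 1)."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

print(hinge_loss(+1, 2.0))  # 0.0 (correct and beyond the margin)
print(hinge_loss(+1, 0.3))  # 0.7 (correct but inside the margin)
print(round(bce_loss(1, 0.9), 4))  # 0.1054
```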
C9868
It gives us tools that can be used to go forward. The community of scientists has defined what "graphical models" are, and tools have been developed that apply to models matching this definition. A neural network is one of them: it is a graphical model.
C9869
area under the curve
C9870
Two key challenges of streaming data and how to solve them. (1) Streaming data is very complex: it is particularly challenging to handle because it is continuously generated by an array of sources and devices and is delivered in a wide variety of formats. (2) Business wants data, but IT can't keep up.
C9871
A distribution is skewed if one of its tails is longer than the other. The first distribution shown has a positive skew. This means that it has a long tail in the positive direction. The distribution below it has a negative skew since it has a long tail in the negative direction.
C9872
When I calculate population variance, I then divide the sum of squared deviations from the mean by the number of items in the population (in example 1 I was dividing by 12). When I calculate sample variance, I divide it by the number of items in the sample less one.
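The two conventions, via the stdlib `statistics` module:

```python
# Population variance divides by n; sample variance divides by n - 1.
from statistics import pvariance, variance

data = [2, 4, 4, 4, 5, 5, 7, 9]  # mean = 5, sum of squared deviations = 32
print(pvariance(data))  # 4            (32 / 8, divide by n)
print(variance(data))   # ≈ 4.5714     (32 / 7, divide by n - 1)
```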
C9873
Hypergeometric formula. The hypergeometric distribution has the following properties: the mean of the distribution is equal to n * k / N. The variance is n * k * (N - k) * (N - n) / [N^2 * (N - 1)].
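The two formulas as functions; the deck-of-cards numbers are an illustrative example:

```python
# Mean and variance of the hypergeometric distribution:
# N = population size, k = successes in the population, n = draws.
def hypergeom_mean(N, k, n):
    return n * k / N

def hypergeom_var(N, k, n):
    return n * k * (N - k) * (N - n) / (N ** 2 * (N - 1))

# Draw 5 cards from a 52-card deck containing 13 hearts:
print(hypergeom_mean(52, 13, 5))  # 1.25 hearts expected
print(round(hypergeom_var(52, 13, 5), 4))  # ≈ 0.864
```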
C9874
Introduction. The standard deviation is a measure of the spread of scores within a set of data. Usually, we are interested in the standard deviation of a population. However, as we are often presented with data from a sample only, we can estimate the population standard deviation from a sample standard deviation.
C9875
Functions of random variables: one law is called the "weak" law of large numbers, and the other is called the "strong" law of large numbers. The weak law describes how a sequence of probabilities converges, and the strong law describes how a sequence of random variables behaves in the limit.
C9876
Gradient Descent runs iteratively to find the optimal values of the parameters corresponding to the minimum value of the given cost function, using calculus. Mathematically, the technique of the 'derivative' is extremely important to minimise the cost function because it helps get the minimum point.
C9877
SVM kernel functions: SVM algorithms use a set of mathematical functions that are defined as the kernel. The function of the kernel is to take data as input and transform it into the required form. Examples include linear, nonlinear, polynomial, radial basis function (RBF), and sigmoid kernels.
C9878
Statistics is generally considered a prerequisite to the field of applied machine learning. We need statistics to help transform observations into information and to answer questions about samples of observations.
C9879
To identify a random error, the measurement must be repeated a small number of times. If the observed value changes apparently randomly with each repeated measurement, then there is probably a random error. The random error is often quantified by the standard deviation of the measurements.
C9880
It is used to predict values of a continuous response variable using one or more explanatory variables and can also identify the strength of the relationships between these variables (these two goals of regression are often referred to as prediction and explanation).
C9881
Steps 3/4: Test Statistic and p-Value. This is the heart of a hypothesis test. Definition: The p-value is the probability of getting your sample, or a sample even further from H0, if H0 is true.
C9882
The larger the absolute value of the t-value, the smaller the p-value, and the greater the evidence against the null hypothesis.
C9883
To find the area between two positive z scores takes a couple of steps. First use the standard normal distribution table to look up the areas that go with the two z scores. Next subtract the smaller area from the larger area.
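The same subtraction can be done numerically with the stdlib `statistics.NormalDist` instead of a table; the z scores below are illustrative:

```python
# Area between two z-scores: difference of the two cumulative areas.
from statistics import NormalDist

def area_between(z1, z2):
    cdf = NormalDist().cdf  # standard normal cumulative distribution
    return cdf(max(z1, z2)) - cdf(min(z1, z2))

print(round(area_between(0.5, 1.5), 4))  # 0.2417
```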
C9884
A matrix norm is a number defined in terms of the entries of the matrix. The norm is a useful quantity which can give important information about a matrix.
C9885
Conclusion. Linear Regression is the process of finding a line that best fits the data points available on the plot, so that we can use it to predict output values for inputs that are not present in the data set we have, with the belief that those outputs would fall on the line.
C9886
The precision-recall curve shows the tradeoff between precision and recall for different thresholds. A high area under the curve represents both high recall and high precision, where high precision relates to a low false positive rate, and high recall relates to a low false negative rate.
C9887
Nonlinear filters: Non-linear functions of signals. Examples: thresholding, image equalisation, or median filtering.
C9888
When comparing data samples from different populations, covariance is used to determine how much two random variables vary together, whereas correlation is used to determine when a change in one variable can result in a change in another. Both covariance and correlation measure linear relationships between variables.
C9889
The Google Goggles app is an image-recognition mobile app that uses visual search technology to identify objects through a mobile device's camera. Users can take a photo of a physical object, and Google searches and retrieves information about the image.
C9890
Your description is confusing, but it is entirely possible to have test error both lower and higher than training error. A lower training error is expected when a method easily overfits the training data yet generalizes poorly.
C9891
A wide-column store (or extensible record stores) is a type of NoSQL database. It uses tables, rows, and columns, but unlike a relational database, the names and format of the columns can vary from row to row in the same table. A wide-column store can be interpreted as a two-dimensional key–value store.
C9892
Qualitative Variables - Variables that are not measurement variables. Their values do not result from measuring or counting. Examples: hair color, religion, political party, profession. Designator - Values that are used to identify individuals in a table.
C9893
Cross-sectional studies are observational studies that collect information about individuals at a specific point in time or over a very short period of time. For the lung cancer study, it could be that individuals develop cancer after the data are collected, so the study will not give the full picture.
C9894
Implementing Deep Learning Methods and Feature Engineering for Text Data: FastText. Overall, FastText is a framework for learning word representations and also performing robust, fast and accurate text classification. The framework is open-sourced by Facebook on GitHub.
C9895
The distinction between probability and likelihood is fundamentally important: Probability attaches to possible results; likelihood attaches to hypotheses. Explaining this distinction is the purpose of this first column. Possible results are mutually exclusive and exhaustive.
C9896
The adjusted R-squared compensates for the addition of variables and only increases if the new predictor enhances the model above what would be obtained by chance. Conversely, it will decrease when a predictor improves the model less than what is predicted by chance.
C9897
Genetic algorithms are a type of learning algorithm that uses the idea that crossing over the weights of two good neural networks would result in a better neural network.
C9898
Abstract. A memory-based learning system is an extended memory management system that decomposes the input space either statically or dynamically into subregions for the purpose of storing and retrieving functional information.
C9899
A residual plot is typically used to find problems with regression. Some data sets are not good candidates for regression, including: Heteroscedastic data (points at widely varying distances from the line). Data that is non-linearly associated.