Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has:
1. an image of a handwritten digit and
2. a corresponding label (a number 0-9 that identifies the image)
We'll call the images, which will be the input to our neural network, X and their corresponding labels Y.
We're going to want our labels as one-hot vectors, which are vectors that hold mostly 0's and a single 1. It's easiest to see this in an example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].
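The one-hot encoding can be sketched in a few lines of NumPy. The helper name `to_one_hot` is hypothetical (the MNIST loader does this for you via `one_hot=True`); this is just to make the representation concrete.

```python
import numpy as np

def to_one_hot(label, num_classes=10):
    # Hypothetical helper: build a vector of zeros and set the
    # position corresponding to the label to 1
    vec = np.zeros(num_classes)
    vec[label] = 1.0
    return vec

print(to_one_hot(0))  # [1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
print(to_one_hot(4))  # [0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]
```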
Flattened data
For this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values.
Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network. | # Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
print(trainX.shape)
print(trainY.shape) | intro-to-tflearn/TFLearn_Digit_Recognition.ipynb | abhi1509/deep-learning | mit |
Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title. | # Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by its index in the MNIST set
def show_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display the training image at index 5
show_digit(5)
Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define:
The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
Hidden layers, which recognize patterns in data and connect the input to the output layer, and
The output layer, which defines how the network learns and outputs a label for a given image.
Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call; it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling tflearn.fully_connected(net, n_units).
Then, to set how you train the network, use:
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated; in this example, categorical cross-entropy is used.
Finally, you put all this together to create the model with tflearn.DNN(net).
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer. | # Define the neural network
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
# Include the input layer, hidden layer(s), and set how you want to train the model
#Input layer
net = tflearn.input_data([None, 784])  # i.e., len(trainX[0])
#Hidden layers
net = tflearn.fully_connected(net, 392, activation="ReLU") #ReLU -> f(x)=max(x,0)
net = tflearn.fully_connected(net, 196, activation="ReLU") #ReLU -> f(x)=max(x,0)
#Output layer
net = tflearn.fully_connected(net, 10, activation="softmax") #softmax -> outputs sum to 1, interpretable as class probabilities
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
# This model assumes that your network is named "net"
model = tflearn.DNN(net)
return model
# Build the model
model = build_model()
Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely! | # Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=20)
Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy! | # Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
Poisson Distribution
Models the count of independent event occurrences in a fixed interval.
Used for count-based distributions.
$$
f(k;\lambda)=Pr(X = k)=\frac{\lambda^k e^{-\lambda}}{k!}
$$
Here, e is Euler's number, k is the number of occurrences for which the probability is to be determined, and lambda is the mean number of occurrences.
Example:
Let's understand this with an example. The number of cars that pass through a bridge in an hour is 20. What would be the probability of 23 cars passing through the bridge in an hour?
```Python
from scipy.stats import poisson
rv = poisson(20)
rv.pmf(23)
Result: 0.066881473662401172
```
With the Poisson function, we define the mean value, which is 20 cars. The rv.pmf function gives the probability, which is around 6%, that 23 cars will pass the bridge.
Bernoulli Distribution
Can perform an experiment with two possible outcomes: success or failure.
Success has a probability of p, and failure has a probability of 1 - p. A random variable that takes a 1 value in case of a success and 0 in case of failure is called a Bernoulli distribution. The probability distribution function can be written as:
$$
P(n)=\begin{cases}1-p & for & n = 0\\ p & for & n = 1\end{cases}
$$
It can also be written like this:
$$
P(n)=p^n(1-p)^{1-n}
$$
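The closed-form pmf above can be checked numerically against SciPy's bernoulli distribution; this is a quick sketch, with p = 0.7 chosen arbitrarily.

```python
from scipy.stats import bernoulli

p = 0.7
for n in (0, 1):
    # P(n) = p**n * (1 - p)**(1 - n), the closed form above
    manual = p**n * (1 - p)**(1 - n)
    print(n, manual, bernoulli.pmf(n, p))
```

Both columns agree: the probability of failure (n = 0) is 0.3 and of success (n = 1) is 0.7.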
The distribution function can be written like this:
$$
D(n) = \begin{cases}1-p & for & n=0\\ 1 & for & n=1\end{cases}
$$
Example: Voting in an election is a good example of the Bernoulli distribution. A Bernoulli distribution can be generated using the bernoulli.rvs() function of the SciPy package. | from scipy.stats import bernoulli
bernoulli.rvs(0.7, size=100) | _oldnotebooks/Inferential_Statistics.ipynb | eneskemalergin/OldBlog | mit |
z-score
Expresses the value of a distribution in std with respect to mean.
$$
z = \frac{X - \mu}{\sigma}
$$
Here, X is the value in the distribution, μ is the mean of the distribution, and σ is the
standard deviation of the distribution.
Example: A classroom has 60 students in it and they have just got their mathematics examination score. We simulate the score of these 60 students with a normal distribution using the following command: | import numpy as np
class_score = np.random.normal(50, 10, 60).round()
plt.hist(class_score, 30, normed=True) # Number of breaks is 30
plt.show()
The score of each student can be converted to a z-score using the following functions: | from scipy import stats
stats.zscore(class_score)
So, a student with a score of 60 out of 100 has a z-score of 1.334. To make more sense of the z-score, we'll use the standard normal table.
This table helps in determining the probability of a score.
We would like to know what the probability of getting a score above 60 would be.
The standard normal table can help us in determining the probability of the occurrence of the score, but we do not have to perform the cumbersome task of finding the value by looking through the table and finding the probability. This task is made simple by the cdf function, which is the cumulative distribution function: | prob = 1 - stats.norm.cdf(1.334)
prob
The cdf function gives the probability of getting values up to the z-score of 1.334, and doing a minus one of it will give us the probability of getting a z-score, which is above it. In other words, 0.09 is the probability of getting marks above 60.
Let's ask another question, "how many students made it to the top 20% of the class?"
Now, to get the z-score at which the top 20% score marks, we can use the ppf function in SciPy: | stats.norm.ppf(0.80) | _oldnotebooks/Inferential_Statistics.ipynb | eneskemalergin/OldBlog | mit |
The preceding output tells us that the top 20% of scores lie above a z-score of roughly 0.84, which can be converted back into a mark as follows: | (0.84 * class_score.std()) + class_score.mean()
We multiply the z-score with the standard deviation and then add the result with the mean of the distribution. This helps in converting the z-score to a value in the distribution. The 55.83 marks means that students who have marks more than this are in the top 20% of the distribution.
The z-score is an essential concept in statistics, which is widely used. Now you can understand that it is basically used in standardizing any distribution so that it can be compared or inferences can be derived from it.
### p-value
A p-value is the probability of observing a result at least as extreme as the one obtained, under the assumption that the null hypothesis is true.
If the p-value is equal to or less than the significance level (α), then the null hypothesis is inconsistent with the data and it needs to be rejected.
Let's understand this concept with an example where the null hypothesis is that it is common for students to score 68 marks in mathematics.
Let's define the significance level at 5%. If the p-value is less than 5%, then the null hypothesis is rejected and it is not common to score 68 marks in mathematics.
Let's get the z-score of 68 marks: | zscore = ( 68 - class_score.mean() ) / class_score.std()
zscore
prob = 1 - stats.norm.cdf(zscore)
prob
One-tailed and two-tailed tests
The example in the previous section was an instance of a one-tailed test where the null hypothesis is rejected or accepted based on one direction of the normal distribution.
In a two-tailed test, both the tails of the null hypothesis are used to test the hypothesis.
In a two-tailed test, when a significance level of 5% is used, then it is distributed equally in the both directions, that is, 2.5% of it in one direction and 2.5% in the other direction.
Let's understand this with an example. The mean score of the mathematics exam at a national level is 60 marks and the standard deviation is 3 marks.
The mean marks of a class are 53. The null hypothesis is that the mean marks of the class are similar to the national average. Let's test this hypothesis by first getting the z-score of the class mean: | zscore = (53 - 60)/3.0
zscore
prob = stats.norm.cdf(zscore)
prob
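Because this is a two-tailed test, the one-sided tail probability has to be doubled before comparing against the 5% significance level. A minimal sketch, using the example's numbers (national mean 60, standard deviation 3, class mean 53):

```python
from scipy import stats

zscore = (53 - 60) / 3.0              # class mean vs national mean
tail = stats.norm.cdf(-abs(zscore))   # probability in one tail
p_two_tailed = 2 * tail               # split equally across both tails
print(p_two_tailed)                   # about 0.02, below the 5% level
```

Since the two-tailed p-value falls below 0.05, the null hypothesis that the class mean matches the national average would be rejected.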
Type 1 and Type 2 errors
Type 1 error is a type of error that occurs when there is a rejection of the null hypothesis when it is actually true. This kind of error is also called an error of the first kind and is equivalent to false positives.
Let's understand this concept using an example. There is a new drug that is being developed and it needs to be tested on whether it is effective in combating diseases. The null hypothesis is that it is not effective in combating diseases.
The significance level is kept at 5% so that the null hypothesis can be accepted confidently 95% of the time. However, 5% of the time we'll reject the null hypothesis even though it is actually true, which means that even though the drug is ineffective, it is assumed to be effective.
The Type 1 error is controlled by controlling the significance level, which is alpha. Alpha is the highest probability to have a Type 1 error. The lower the alpha, the lower will be the Type 1 error.
The Type 2 error is the kind of error that occurs when we do not reject a null hypothesis that is false. This error is also called the error of the second kind and is equivalent to a false negative.
This kind of error occurs in the drug scenario when the drug is assumed to be ineffective but it is actually effective.
These errors can be controlled one at a time. If one of the errors is lowered, then the other one increases. It depends on the use case and the problem statement that the analysis is trying to address, and depending on it, the appropriate error should reduce. In the case of this drug scenario, typically, a Type 1 error should be lowered because it is better to ship a drug that is confidently effective.
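The claim that alpha bounds the Type 1 error rate can be demonstrated with a small simulation: draw two samples from the same distribution (so the null hypothesis is true by construction) many times, and count how often a t-test rejects it. This is a sketch with arbitrary sample sizes and a fixed seed.

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
alpha = 0.05
trials = 2000
false_positives = 0

# Both samples share the same distribution, so the null is true;
# every rejection is therefore a Type 1 error.
for _ in range(trials):
    a = rng.normal(0, 1, 30)
    b = rng.normal(0, 1, 30)
    if stats.ttest_ind(a, b)[1] < alpha:
        false_positives += 1

print(false_positives / trials)  # close to alpha, i.e. around 0.05
```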
Confidence Interval
A confidence interval is a type of interval estimate for a population parameter. The confidence interval helps in determining the interval within which the population mean is likely to lie.
Let's try to understand this concept by using an example. Let's take the height of every man in Kenya and determine with 95% confidence interval the average of height of Kenyan men at a national level.
Let's take 50 men and their height in centimeters: | height_data = np.array([ 186.0, 180.0, 195.0, 189.0, 191.0,
177.0, 161.0, 177.0, 192.0, 182.0,
185.0, 192.0, 173.0, 172.0, 191.0,
184.0, 193.0, 182.0, 190.0, 185.0,
181.0,188.0, 179.0, 188.0, 170.0, 179.0,
180.0, 189.0, 188.0, 185.0, 170.0,
197.0, 187.0,182.0, 173.0, 179.0,184.0,
177.0, 190.0, 174.0, 203.0, 206.0, 173.0,
169.0, 178.0,201.0, 198.0, 166.0,171.0, 180.0])
plt.hist(height_data, 30, normed=True, color='r')
plt.show()
# The mean of the distribution
height_data.mean()
So, the average height of a man from the sample is 183.4 cm.
To determine the confidence interval, we'll now define the standard error of the mean.
The standard error of the mean is the deviation of the sample mean from the population mean. It is defined using the following formula:
$$
SE_{\overline{x}} = \frac{s}{\sqrt{n}}
$$
Here, s is the standard deviation of the sample, and n is the number of elements of the sample.
This can be calculated using the sem() function of the SciPy package: | stats.sem(height_data) | _oldnotebooks/Inferential_Statistics.ipynb | eneskemalergin/OldBlog | mit |
So, there is a standard error of the mean of 1.38 cm. The lower and upper limit of the confidence interval can be determined by using the following formula:
Upper/Lower limit = mean(height) + / - sigma * SEmean(x)
For lower limit:
183.24 + (1.96 * 1.38) = 185.94
For upper limit:
183.24 - (1.96*1.38) = 180.53
A z multiplier of 1.96 covers 95% of the area in the normal distribution.
We can confidently say that the population mean lies between 180.53 cm and 185.94 cm of height.
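The same interval can be obtained directly with `stats.norm.interval`, which takes the confidence level, the mean, and the standard error. A sketch on a hypothetical subset of the height sample:

```python
import numpy as np
from scipy import stats

# Hypothetical subset of the height sample, for illustration only
heights = np.array([186.0, 180.0, 195.0, 189.0, 191.0,
                    177.0, 161.0, 177.0, 192.0, 182.0])

sem = stats.sem(heights)
low, high = stats.norm.interval(0.95, loc=heights.mean(), scale=sem)
print(low, high)
```

The half-width of the returned interval equals 1.96 times the standard error, matching the manual calculation above.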
New Example: Let's assume we take a sample of 50 people, record their height, and then repeat this process 30 times. We can then plot the averages of each sample and observe the distribution. | average_height = []
for i in range(30):
# Create a sample of 50 with mean 183 and standard deviation 10
sample50 = np.random.normal(183, 10, 50).round()
# Add the mean on sample of 50 into average_height list
average_height.append(sample50.mean())
# Plot it with 10 bars and normalization
plt.hist(average_height, 10, normed=True)
plt.show()
You can observe that the mean ranges from 180 to 187 cm when we simulated the average height of 50 sample men, which was taken 30 times.
Let's see what happens when we sample 1000 men and repeat the process 30 times: | average_height = []
for i in range(30):
# Create a sample of 1000 with mean 183 and standard deviation 10
sample1000 = np.random.normal(183, 10, 1000).round()
average_height.append(sample1000.mean())
plt.hist(average_height, 10, normed=True)
plt.show()
As you can see, the height varies from 182.4 cm to 183.5 cm. What does this mean?
It means that as the sample size increases, the standard error of the mean decreases, which also means that the confidence interval becomes narrower, and we can tell with certainty the interval that the population mean would lie on.
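The shrinking standard error can be checked directly: the SEM scales as $s/\sqrt{n}$, so multiplying the sample size by 20 shrinks it by roughly $\sqrt{20}$. A sketch with a fixed seed:

```python
import numpy as np
from scipy import stats

rng = np.random.RandomState(1)
sem_small = stats.sem(rng.normal(183, 10, 50))
sem_large = stats.sem(rng.normal(183, 10, 1000))

print(sem_small)   # roughly 10 / sqrt(50),   about 1.4
print(sem_large)   # roughly 10 / sqrt(1000), about 0.3
```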
Correlation
In statistics, correlation defines the similarity between two random variables. The most commonly used correlation is the Pearson correlation and it is defined by the following:
$$
\rho_{X,Y} = \frac{cov(X,Y)}{\sigma_{x}\sigma_{y}} = \frac{E[(X - \mu_{X})(Y - \mu_{Y})]}{\sigma_{x}\sigma_{y}}
$$
The preceding formula defines the Pearson correlation as the covariance between X and Y, which is divided by the standard deviation of X and Y, or it can also be defined as the expected mean of the sum of multiplied difference of random variables with respect to the mean divided by the standard deviation of X and Y. Let's understand this with an example. Let's take the mileage and horsepower of various cars and see if there is a relation between the two. This can be achieved using the pearsonr function in the SciPy package: | mpg = [21.0, 21.0, 22.8, 21.4, 18.7, 18.1, 14.3, 24.4, 22.8, 19.2, 17.8,
16.4, 17.3, 15.2, 10.4, 10.4, 14.7, 32.4, 30.4, 33.9, 21.5, 15.5,
15.2, 13.3, 19.2, 27.3, 26.0, 30.4, 15.8,19.7, 15.0, 21.4]
hp = [110, 110, 93, 110, 175, 105, 245, 62, 95, 123, 123, 180, 180, 180,
205, 215, 230, 66, 52, 65, 97, 150, 150, 245, 175, 66, 91, 113, 264,
175, 335, 109]
stats.pearsonr(mpg,hp)
The first value of the output gives the correlation between the horsepower and the mileage.
The second value gives the p-value.
So, the first value tells us that they are highly negatively correlated, and the p-value tells us that there is a significant correlation between them: | plt.scatter(mpg, hp, color='r')
plt.show()
Let's look into another correlation called the Spearman correlation. The Spearman correlation applies to the rank order of the values and so it provides a monotonic relation between the two distributions. It is useful for ordinal data (data that has an order, such as movie ratings or grades in class) and is not affected by outliers.
Let's get the Spearman correlation between the miles per gallon and horsepower. This can be achieved using the spearmanr() function in the SciPy package: | stats.spearmanr(mpg, hp) | _oldnotebooks/Inferential_Statistics.ipynb | eneskemalergin/OldBlog | mit |
We can see that the Spearman correlation is -0.89 and the p-value is significant.
Let's do an experiment in which we introduce a few outlier values in the data and see how the Pearson and Spearman correlation gets affected: | mpg = [21.0, 21.0, 22.8, 21.4, 18.7, 18.1, 14.3, 24.4, 22.8,
19.2, 17.8, 16.4, 17.3, 15.2, 10.4, 10.4, 14.7, 32.4, 30.4,
33.9, 21.5, 15.5, 15.2, 13.3, 19.2, 27.3, 26.0, 30.4, 15.8,
19.7, 15.0, 21.4, 120, 3]
hp = [110, 110, 93, 110, 175, 105, 245, 62, 95, 123, 123, 180,
180, 180, 205, 215, 230, 66, 52, 65, 97, 150, 150, 245,
175, 66, 91, 113, 264, 175, 335, 109, 30, 600]
plt.scatter(mpg, hp)
plt.show()
From the plot, you can clearly make out the outlier values. Let's see how the correlations are affected, for both the Pearson and Spearman correlation: | stats.pearsonr(mpg, hp)
stats.spearmanr(mpg, hp)
We can clearly see that the Pearson correlation has been drastically affected by the outliers, dropping from a correlation of 0.89 to 0.47.
The Spearman correlation did not get affected much as it is based on the order rather than the actual value in the data.
Z-test vs T-test
We have already done a few Z-tests before where we validated our null hypothesis.
A T-distribution is similar to a Z-distribution—it is centered at zero and has a basic bell shape, but it is shorter and flatter around the center than the Z-distribution.
The T-distribution's standard deviation is usually proportionally larger than the Z-distribution's, which is why you see fatter tails on each side.
The t distribution is usually used to analyze the population when the sample is small.
The Z-test is used to compare the population mean against a sample or compare the population mean of two distributions with a sample size greater than 30. An example of a Z-test would be comparing the heights of men from different ethnicity groups.
The T-test is used to compare the population mean against a sample, or compare the population mean of two distributions with a sample size less than 30, and when you don't know the population's standard deviation.
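The fatter tails mean the T-distribution needs a larger critical value than the normal distribution for the same confidence level, with the gap closing as the degrees of freedom grow. A quick sketch comparing the two:

```python
from scipy import stats

z_crit = stats.norm.ppf(0.975)         # 1.96 for a two-tailed 5% test
print(z_crit)
for df in (5, 10, 30, 100):
    # t critical value shrinks toward 1.96 as df grows
    print(df, stats.t.ppf(0.975, df))
```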
Let's do a T-test on two classes that are given a mathematics test and have 10 students in each class:
To perform the T-test, we can use the ttest_ind() function in the SciPy package: | class1_score = np.array([45.0, 40.0, 49.0, 52.0, 54.0, 64.0, 36.0, 41.0, 42.0, 34.0])
class2_score = np.array([75.0, 85.0, 53.0, 70.0, 72.0, 93.0, 61.0, 65.0, 65.0, 72.0])
stats.ttest_ind(class1_score,class2_score)
The first value in the output is the calculated t-statistic, whereas the second value is the p-value; the p-value shows that the two distributions are not identical.
The F distribution
The F distribution is also known as Snedecor's F distribution or the Fisher–Snedecor distribution.
An f statistic is given by the following formula:
$$
f = \frac{s_1^2/\sigma_1^2}{s_2^2/\sigma_2^2}
$$
Here, $s_1$ is the standard deviation of sample 1 of size $n_1$, $s_2$ is the standard deviation of sample 2 of size $n_2$, $\sigma_1$ is the population standard deviation of sample 1, and $\sigma_2$ is the population standard deviation of sample 2.
The distribution of all the possible values of the f statistic is called the F distribution, and $d_1$ and $d_2$ represent its degrees of freedom.
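SciPy exposes the F distribution as `stats.f`, parameterized by the two degrees of freedom. A sketch with assumed values $d_1 = 5$, $d_2 = 10$, chosen for illustration:

```python
from scipy import stats

d1, d2 = 5, 10   # assumed degrees of freedom for illustration
f_stat = 3.33
# Probability that an F statistic exceeds f_stat under F(5, 10)
p = 1 - stats.f.cdf(f_stat, d1, d2)
print(p)         # about 0.05, so 3.33 is the 5% critical value
```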
The chi-square distribution
The chi-square statistics are defined by the following formula:
$$
X^2 = [(n-1)*s^2]/\sigma^2
$$
Here, n is the size of the sample, s is the standard deviation of the sample, and σ is the standard deviation of the population.
If we repeatedly take samples and define the chi-square statistics, then we can form a chi-square distribution, which is defined by the following probability density function:
$$
Y = Y_0 * (X^2)^{(v/2-1)} * e^{-X^2/2}
$$
Here, $Y_0$ is a constant that depends on the number of degrees of freedom, $X^2$ is the chi-square statistic, $v = n - 1$ is the number of degrees of freedom, and e is a constant equal to the base of the natural logarithm system.
$Y_0$ is defined so that the area under the chi-square curve is equal to one.
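The statistic from the formula above can be computed and located on the chi-square distribution with `stats.chi2`. The sample size and standard deviations below are assumed values for illustration:

```python
from scipy import stats

n = 10                   # assumed sample size
s, sigma = 12.0, 10.0    # assumed sample and population std
chi_sq = (n - 1) * s**2 / sigma**2   # the statistic from the formula
# Probability of a statistic at least this large, with n - 1 df
p = 1 - stats.chi2.cdf(chi_sq, df=n - 1)
print(chi_sq, p)
```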
The Chi-square test can be used to test whether the observed data differs significantly from the expected data. Let's take the example of a dice. The dice is rolled 36 times and the probability that each face should turn upwards is 1/6. So, the expected and observed distribution is as follows: | expected = np.array([6,6,6,6,6,6])
observed = np.array([7, 5, 3, 9, 6, 6])
The null hypothesis in the chi-square test is that the observed value is similar to the
expected value.
The chi-square can be performed using the chisquare function in the SciPy package: | stats.chisquare(observed,expected) | _oldnotebooks/Inferential_Statistics.ipynb | eneskemalergin/OldBlog | mit |
The first value is the chi-square value and the second value is the p-value, which is very high. This means that the null hypothesis is valid and the observed value is similar to the expected value.
The chi-square test of independence is a statistical test used to determine whether two categorical variables are independent of each other or not.
Let's take the following example to see whether there is a preference for a book based on the gender of people reading it.
The Chi-Square test of independence can be performed using the chi2_contingency function in the SciPy package: | men_women = np.array([[100, 120, 60],[350, 200, 90]])
stats.chi2_contingency(men_women)
The first value is the chi-square value,
The second value is the p-value, which is very small, and means that there is an
association between the gender of people and the genre of the book they read.
The third value is the degrees of freedom.
The fourth value, which is an array, is the expected frequencies.
Anova
Analysis of Variance (ANOVA) is a statistical method used to test differences between two or more means. This test basically compares the means between groups and determines whether any of these means are significantly different from each other:
$$
H_0 : \mu_1 = \mu_2 = \mu_3 = ... = \mu_k
$$
ANOVA is a test that can tell you whether at least one group mean differs significantly from the others. Let's take the height of men who are from three different countries and see if their heights are significantly different from the others: | country1 = np.array([ 176., 201., 172., 179., 180., 188., 187., 184., 171.,
181., 192., 187., 178., 178., 180., 199., 185., 176.,
207., 177., 160., 174., 176., 192., 189., 187., 183.,
180., 181., 200., 190., 187., 175., 179., 181., 183.,
171., 181., 190., 186., 185., 188., 201., 192., 188.,
181., 172., 191., 201., 170., 170., 192., 185., 167.,
178., 179., 167., 183., 200., 185.])
country2 = np.array([177., 165., 185., 187., 175., 172.,179., 192.,169.,
167., 162., 165., 188., 194., 187., 175., 163., 178.,
197., 172., 175., 185., 176., 171., 172., 186., 168.,
178., 191., 192., 175., 189., 178., 181., 170., 182.,
166., 189., 196., 192., 189., 171., 185., 198., 181.,
167., 184., 179., 178., 193., 179., 177., 181., 174.,
171., 184., 156., 180., 181., 187.])
country3 = np.array([ 191.,173., 175., 200., 190.,191.,185.,190.,184.,190.,
191., 184., 167., 194., 195., 174., 171., 191.,
174., 177., 182., 184., 176., 180., 181., 186., 179.,
176., 186., 176., 184., 194., 179., 171., 174., 174.,
182., 198., 180., 178., 200., 200., 174., 202., 176.,
180., 163., 159., 194., 192., 163., 194., 183., 190.,
186., 178., 182., 174., 178., 182.])
stats.f_oneway(country1,country2,country3)
Hyperparameters for training the model follow the same pattern as Word2Vec. FastText supports the following parameters from the original word2vec -
- model: Training architecture. Allowed values: cbow, skipgram (Default cbow)
- size: Size of embeddings to be learnt (Default 100)
- alpha: Initial learning rate (Default 0.025)
- window: Context window size (Default 5)
- min_count: Ignore words with number of occurrences below this (Default 5)
- loss: Training objective. Allowed values: ns, hs, softmax (Default ns)
- sample: Threshold for downsampling higher-frequency words (Default 0.001)
- negative: Number of negative words to sample, for ns (Default 5)
- iter: Number of epochs (Default 5)
- sorted_vocab: Sort vocab by descending frequency (Default 1)
- threads: Number of threads to use (Default 12)
In addition, FastText has two additional parameters -
- min_n: min length of char ngrams to be used (Default 3)
- max_n: max length of char ngrams to be used (Default 6)
These control the lengths of character ngrams that each word is broken down into while training and looking up embeddings. If max_n is set to 0, or to be lesser than min_n, no character ngrams are used, and the model effectively reduces to Word2Vec. | model = FastText.train(ft_home, lee_train_file, size=50, alpha=0.05, min_count=10)
print(model) | docs/notebooks/FastText_Tutorial.ipynb | macks22/gensim | lgpl-2.1 |
Continuation of training with FastText models is not supported.
Saving/loading models
Models can be saved and loaded via the load and save methods. | model.save('saved_fasttext_model')
loaded_model = FastText.load('saved_fasttext_model')
print(loaded_model)
The save_word2vec_format method causes the vectors for ngrams to be lost. As a result, a model loaded in this way will behave as a regular word2vec model.
Word vector lookup
FastText models support vector lookups for out-of-vocabulary words by summing up character ngrams belonging to the word. | print('night' in model.wv.vocab)
print('nights' in model.wv.vocab)
print(model['night'])
print(model['nights'])
The word vector lookup operation only works if at least one of the component character ngrams is present in the training corpus. For example - | # Raises a KeyError since none of the character ngrams of the word `axe` are present in the training data
model['axe']
The in operation works slightly differently from the original word2vec. It tests whether a vector for the given word exists or not, not whether the word is present in the word vocabulary. To test whether a word is present in the training word vocabulary - | # Tests if word present in vocab
print("word" in model.wv.vocab)
# Tests if vector present for word
print("word" in model) | docs/notebooks/FastText_Tutorial.ipynb | macks22/gensim | lgpl-2.1 |
Similarity operations
Similarity operations work the same way as word2vec. Out-of-vocabulary words can also be used, provided they have at least one character ngram present in the training data. | print("nights" in model.wv.vocab)
print("night" in model.wv.vocab)
model.similarity("night", "nights") | docs/notebooks/FastText_Tutorial.ipynb | macks22/gensim | lgpl-2.1 |
Syntactically similar words generally have high similarity in FastText models, since a large number of the component char-ngrams will be the same. As a result, FastText generally does better at syntactic tasks than Word2Vec. A detailed comparison is provided here.
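The effect can be made concrete by counting how many character ngrams two surface forms share. This is a rough sketch of the intuition only — gensim sums ngram vectors rather than comparing ngram sets:

```python
def char_ngrams(word, min_n=3, max_n=6):
    # pad with FastText-style boundary markers before extracting ngrams
    w = '<' + word + '>'
    return {w[i:i + n] for n in range(min_n, max_n + 1)
            for i in range(len(w) - n + 1)}

a, b = char_ngrams('night'), char_ngrams('nights')
overlap = len(a & b) / len(a | b)
print(f'{len(a & b)} shared ngrams, Jaccard overlap {overlap:.2f}')
```

The large shared-ngram fraction is why the two word vectors end up close together.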
Other similarity operations - | # The example training corpus is a toy corpus, results are not expected to be good, for proof-of-concept only
model.most_similar("nights")
model.n_similarity(['sushi', 'shop'], ['japanese', 'restaurant'])
model.doesnt_match("breakfast cereal dinner lunch".split())
model.most_similar(positive=['baghdad', 'england'], negative=['london'])
model.accuracy(questions='questions-words.txt')
# Word Movers distance
sentence_obama = 'Obama speaks to the media in Illinois'.lower().split()
sentence_president = 'The president greets the press in Chicago'.lower().split()
# Remove their stopwords.
from nltk.corpus import stopwords
stopwords = stopwords.words('english')
sentence_obama = [w for w in sentence_obama if w not in stopwords]
sentence_president = [w for w in sentence_president if w not in stopwords]
# Compute WMD.
distance = model.wmdistance(sentence_obama, sentence_president)
distance | docs/notebooks/FastText_Tutorial.ipynb | macks22/gensim | lgpl-2.1 |
Below we define a simple exponential decay function of the form
$$y(x)=a e^{-b x}+c$$ | #Here's a simple test function of an exponential decay
def func(x, *params):
#this is the important part, if you want to pass an unspecified number of parameters
#you need to unpack the parameters list in the function definition and then you need
#to specify initial guesses when using curve_fit
return params[0] * exp(-params[1] * x) + params[2]
#Here we generate some fake data
xdata = linspace(0, 4, 100)
y = func(xdata, 2.5, 1.3, 0.5)
#add gaussian white noise
ydata = y + 0.1 * random.normal(size=len(xdata))
#perform the curve_fit, NOTE: you have to give guesses so that curve fit can determine
#the correct number of parameters for the function func
popt, pcov = curve_fit(func, xdata, ydata,p0=ones(3))
#generate a nice fit
x_fit=linspace(0,4,1024)
fit = func(x_fit,*popt) #you need to use the '*' operator to unpack the array specifically
plot(xdata,ydata,'.',x_fit,fit,'--r')
figure()
plot(xdata,ydata-func(xdata,*popt),'ro')
#Let's try writing a function that fits lorentzian peaks
def lor(x, *p):
toReturn = zeros(len(x)) #initialize our returned array
toReturn += p[0] #first parameter is the offset
    if (len(p)-1)%3 != 0:
        #each Lorentzian requires three parameters: amplitude, center, width
        raise ValueError('parameter list must have length 3*n + 1')
for i in range(1,len(p),3):
toReturn += p[i]/(((x-p[i+1])/p[i+2])**2+1)
return toReturn
#Here we generate some fake data
xdata = linspace(0, 100, 2048)
p = [1,1,24,5,2,56,10]
y = lor(xdata, *p)
#add gaussian white noise
ydata = y + 0.2 * random.normal(size=len(xdata))
#perform the curve_fit, NOTE: you have to give guesses so that curve fit can determine
#the correct number of parameters for the function func
popt, pcov = curve_fit(lor, xdata, ydata,p0=p)
#generate a nice fit
fit = lor(xdata,*popt) #you need to use the '*' operator to unpack the array specifically
plot(xdata,ydata,'.',xdata,fit,'--r')
figure()
plot(xdata,ydata-lor(xdata,*popt),'ro') | notebooks/CurveFitTest.ipynb | david-hoffman/scripts | apache-2.0 |
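Beyond the residual plots above, the covariance matrix pcov returned by curve_fit can be turned into one-sigma uncertainty estimates for the fitted parameters. Here is a minimal sketch on a straight-line fit (a deliberately simpler model than the Lorentzian example):

```python
import numpy as np
from scipy.optimize import curve_fit

def line(x, m, b):
    return m * x + b

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = line(x, 2.0, 1.0) + 0.1 * rng.standard_normal(x.size)

popt, pcov = curve_fit(line, x, y)
perr = np.sqrt(np.diag(pcov))  # one-sigma standard errors of the parameters
print(popt, perr)
```

The same `np.sqrt(np.diag(pcov))` recipe applies to any of the fits in this notebook.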
2D Fitting
Here I've taken the code from here.
There's a better definition of a skewed gaussian available here. | #define model function and pass independent variables x and y as a list
def twoD_Gaussian(xdata_tuple, amplitude, xo, yo, sigma_x, sigma_y, theta, offset):
(x, y) = xdata_tuple
xo = float(xo)
yo = float(yo)
a = (np.cos(theta)**2)/(2*sigma_x**2) + (np.sin(theta)**2)/(2*sigma_y**2)
b = -(np.sin(2*theta))/(4*sigma_x**2) + (np.sin(2*theta))/(4*sigma_y**2)
c = (np.sin(theta)**2)/(2*sigma_x**2) + (np.cos(theta)**2)/(2*sigma_y**2)
g = offset + amplitude*np.exp( - (a*((x-xo)**2) + 2*b*(x-xo)*(y-yo)
+ c*((y-yo)**2)))
return g.ravel()
# Create x and y indices
x = np.linspace(0, 200, 201)
y = np.linspace(0, 200, 201)
x, y = np.meshgrid(x, y)
#create data
data = twoD_Gaussian((x, y), 3, 100, 100, 20, 40, pi/4, 10)
# plot twoD_Gaussian data generated above
plt.figure()
plt.imshow(data.reshape(201, 201),origin='bottom')
plt.colorbar()
# add some noise to the data and try to fit the data generated beforehand
initial_guess = (3,100,100,20,40,0,10)
data_noisy = data + 0.2*np.random.normal(size=data.shape)
popt, pcov = curve_fit(twoD_Gaussian, (x, y), data_noisy, p0=initial_guess)
#And plot the results:
data_fitted = twoD_Gaussian((x, y), *popt)
fig, ax = plt.subplots(1, 1)
#overplotting is the default in modern matplotlib; ax.hold was removed
data_noisy.shape = (201, 201)
ax.imshow(data_noisy.reshape(201, 201), origin='bottom',
extent=(x.min(), x.max(), y.min(), y.max()))
ax.contour(x, y, data_fitted.reshape(201, 201), 8, colors='w')
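Because curve_fit expects one-dimensional ydata, the 2D grid and image are flattened with ravel and restored with reshape for plotting. The shape bookkeeping can be sketched on its own, using a plain sum in place of the Gaussian:

```python
import numpy as np

x = np.linspace(0, 200, 201)
y = np.linspace(0, 200, 201)
x, y = np.meshgrid(x, y)                 # both now have shape (201, 201)

data_flat = (x + y).ravel()              # what curve_fit sees: shape (201*201,)
data_img = data_flat.reshape(201, 201)   # back to an image for imshow/contour
print(x.shape, data_flat.shape, data_img.shape)
```

ravel and reshape are views of the same buffer, so the round trip is lossless.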
# Create x and y indices
x = arange(32)
y = arange(32)
x, y = np.meshgrid(x, y)
#create data
data = twoD_Gaussian((x, y), 10, 14, 17, 5, 10, pi/12, 0)
# plot twoD_Gaussian data generated above
plt.figure()
plt.matshow(data.reshape(32, 32),origin='bottom')
plt.colorbar()
# add some noise to the data and try to fit the data generated beforehand
initial_guess = (3,16,16,5,5,0,10)
data_noisy = data + np.random.poisson(4, size=data.shape)
popt, pcov = curve_fit(twoD_Gaussian, (x, y), data_noisy, p0=initial_guess)
#And plot the results:
data_fitted = twoD_Gaussian((x, y), *popt)
fig, ax = plt.subplots(1, 1)
data_noisy.shape = (32, 32)
data.shape = (32, 32)
img = ax.matshow(data_noisy.reshape(32, 32), origin='bottom',
extent=(x.min(), x.max(), y.min(), y.max()))
ax.contour(x, y, data_fitted.reshape(32, 32), 8, colors='w')
colorbar(img)
def _general_function_mle(params, xdata, ydata, function):
# calculate the function
f = function(xdata, *params)
# calculate the MLE version of chi2
chi2 = 2*(f - ydata - ydata * np.log(f/ydata))
# return the sqrt because the np.leastsq will square and sum the result
if chi2.min() < 0:
return nan_to_num(inf)*ones_like(chi2)
else:
return np.sqrt(chi2)
def _weighted_general_function_mle(params, xdata, ydata, function, weights):
return weights * (_general_function_mle(params, xdata, ydata, function))
def _general_function_ls(params, xdata, ydata, function):
return function(xdata, *params) - ydata
def _weighted_general_function_ls(params, xdata, ydata, function, weights):
return weights * _general_function_ls(params, xdata, ydata, function)
##Here's my modification to the above code
#define model function and pass independent variables x and y as a list
def gaussian2D(xdata_tuple, amp, x0, y0, sigma_x, sigma_y, offset):
(x, y) = xdata_tuple
g = offset + amp*exp( -((x-x0)**2/(2*sigma_x**2)+(y-y0)**2/(2*sigma_y**2)))
return g
#create a wrapper function
def gaussian2D_fit(*args):
return gaussian2D(*args).ravel()
def gaussian2D_sym(xdata_tuple, amp, x0, y0, sigma_x, offset):
(x, y) = xdata_tuple
g = offset + amp*exp( -((x-x0)**2+(y-y0)**2)/(2*sigma_x**2))
return g
#create a wrapper function
def gaussian2D_sym_fit(*args):
return gaussian2D_sym(*args).ravel()
# # Create x and y indices
# x = arange(32)
# y = arange(32)
# x, y = np.meshgrid(x, y)
# #create data
# real_params = array([10, 14, 17, 2, 4, 0])
# data = gaussian2D((x, y), *real_params)
# plot twoD_Gaussian data generated above
plt.figure()
plt.matshow(data,origin='bottom')
plt.colorbar()
# add some noise to the data and try to fit the data generated beforehand
initial_guess = (10,12,15,5,7,8)
data_noisy = data + random.poisson(4, data.shape)
mp._general_function = _general_function_mle
mp._weighted_general_function = _weighted_general_function_mle
popt_mle, pcov_mle = mp.curve_fit(gaussian2D_fit, (x, y), data_noisy.ravel(), p0=initial_guess)
mp._general_function = _general_function_ls
mp._weighted_general_function = _weighted_general_function_ls
popt, pcov = mp.curve_fit(gaussian2D_fit, (x, y), data_noisy.ravel(), p0=initial_guess)
#And plot the results:
data_fitted = gaussian2D((x, y), *popt)
fig, ax = plt.subplots(1, 1)
ax.matshow(data_noisy, origin='bottom', extent=(x.min(), x.max(), y.min(), y.max()))
ax.contour(x, y, data_fitted, 8, colors='w')
print(popt_mle)
print(popt)
#[ 2.97066005 31.99547047 31.96779469 4.97361061 9.97955038 1.20776079] | notebooks/CurveFitTest.ipynb | david-hoffman/scripts | apache-2.0 |
With my method of using a wrapper function to prepare the data for fitting I avoid the need to reshape the data for plotting and later analysis, but it means I need to ravel the y-data. | # define jacobians for timing experiments.
def myDfun( params, xdata, ydata, f):
x = xdata[0].ravel()
y = xdata[1].ravel()
amp, x0, y0, sigma_x, sigma_y, offset = params
value = f(xdata, *params)-offset
dydamp = value/amp
dydx0 = value*(x-x0)/sigma_x**2
dydsigmax = value*(x-x0)**2/sigma_x**3
dydy0 = value*(y-y0)/sigma_y**2
dydsigmay = value*(y-y0)**2/sigma_y**3
return vstack((dydamp, dydx0, dydy0, dydsigmax, dydsigmay, ones_like(value)))
def myDfun_sym( params, xdata, ydata, f):
x = xdata[0].ravel()
y = xdata[1].ravel()
amp, x0, y0, sigma_x, offset = params
value = f(xdata, *params)-offset
dydamp = value/amp
dydx0 = value*(x-x0)/sigma_x**2
dydsigmax = value*(x-x0)**2/sigma_x**3
dydy0 = value*(y-y0)/sigma_x**2
return vstack((dydamp, dydx0, dydy0, dydsigmax, ones_like(value)))
myDfun(popt, (x, y), data_noisy.ravel(), gaussian2D_fit)[0].shape | notebooks/CurveFitTest.ipynb | david-hoffman/scripts | apache-2.0 |
Testing timing
Comparing using a Jacobian vs not using one | # With MLE fitting
mp._general_function = _general_function_mle
mp._weighted_general_function = _weighted_general_function_mle
%timeit popt, pcov = mp.curve_fit(gaussian2D_fit, (x, y), data_noisy.ravel(), p0=initial_guess, Dfun=myDfun, col_deriv=1)
%timeit popt, pcov = mp.curve_fit(gaussian2D_fit, (x, y), data_noisy.ravel(), p0=initial_guess)
# With least squares fitting for a non-symmetric model function
mp._general_function = _general_function_ls
mp._weighted_general_function = _weighted_general_function_ls
%timeit popt, pcov = mp.curve_fit(gaussian2D_fit, (x, y), data_noisy.ravel(), p0=initial_guess, Dfun=myDfun, col_deriv=1)
%timeit popt, pcov = mp.curve_fit(gaussian2D_fit, (x, y), data_noisy.ravel(), p0=initial_guess)
# With least squares for a symmetric model function
mp._general_function = _general_function_ls
mp._weighted_general_function = _weighted_general_function_ls
initial_guess2 = (10,12,15,5,8)
%timeit popt, pcov = mp.curve_fit(gaussian2D_sym_fit, (x, y), data_noisy.ravel(), p0=initial_guess2, Dfun=myDfun_sym, col_deriv=1)
%timeit popt, pcov = mp.curve_fit(gaussian2D_sym_fit, (x, y), data_noisy.ravel(), p0=initial_guess2)
# Testing my class; notice it's slower, but it does a lot more
junk_g = Gauss2D(data_noisy)
junk_g.optimize_params(modeltype='full')
%timeit junk_g.optimize_params(junk_g.guess_params)
fig, ax = plt.subplots(1, 1)
ax.matshow(junk_g.data, origin='bottom')
(y, x) = indices(junk_g.data.shape)
ax.contour(x, y, junk_g.fit_model, 8, colors='w')
ax.contour(x, y, data, 8, colors='r')
junk_g.optimize_params(modeltype='norot')
%timeit junk_g.optimize_params(junk_g.guess_params)
fig, ax = plt.subplots(1, 1)
ax.matshow(junk_g.data, origin='bottom')
(y, x) = indices(junk_g.data.shape)
ax.contour(x, y, junk_g.fit_model, 8, colors='w')
ax.contour(x, y, data, 8, colors='r')
junk_g.optimize_params(modeltype='sym')
%timeit junk_g.optimize_params(junk_g.guess_params)
fig, ax = plt.subplots(1, 1)
ax.matshow(junk_g.data, origin='bottom')
(y, x) = indices(junk_g.data.shape)
ax.contour(x, y, junk_g.fit_model, 8, colors='w')
ax.contour(x, y, data, 8, colors='r') | notebooks/CurveFitTest.ipynb | david-hoffman/scripts | apache-2.0 |
Lorentzians | ##Here's my modification to the above code
#define model function and pass independent variables x and y as a list
def lor2D(xdata_tuple, amp, x0, y0, sigma_x, sigma_y, offset):
(x, y) = xdata_tuple
g = offset + amp/(1+((x-x0)/(sigma_x/2))**2)/(1+((y-y0)/(sigma_y/2))**2)
return g
#create a wrapper function
def lor2D_fit(xdata_tuple, amp, x0, y0, sigma_x, sigma_y, offset):
    #call lor2D (not gaussian2D) and flatten the result for curve_fit
    return lor2D(xdata_tuple, amp, x0, y0, sigma_x, sigma_y, offset).ravel()
# Create x and y indices
x = arange(64)
y = arange(64)
x, y = np.meshgrid(x, y)
#create data
data = lor2D((x, y), 3, 32, 32, 5, 10, 10)
# plot twoD_Gaussian data generated above
plt.figure()
plt.matshow(data,origin='bottom')
plt.colorbar()
# add some noise to the data and try to fit the data generated beforehand
initial_guess = (4,0,30,25,35,8)
data_noisy = data + 0.2*randn(*data.shape)
popt, pcov = curve_fit(lor2D_fit, (x, y), data_noisy.ravel(), p0=initial_guess)
#And plot the results:
data_fitted = lor2D((x, y), *popt)
fig, ax = plt.subplots(1, 1)
ax.matshow(data_noisy, origin='bottom', extent=(x.min(), x.max(), y.min(), y.max()))
ax.contour(x, y, data_fitted, 8, colors='w') | notebooks/CurveFitTest.ipynb | david-hoffman/scripts | apache-2.0 |
Implementing an SLR-Table-Generator
A Grammar for Grammars
Since the goal is to generate an SLR-table generator, we first need to implement a parser for context-free grammars.
The file Examples/c-grammar.g, shown below, contains an example grammar. | !cat Examples/c-grammar.g
We use <span style="font-variant:small-caps;">Antlr</span> to develop a parser for context free grammars. The pure grammar used to parse context free grammars is stored in the file Pure.g4. It is similar to the grammar that we have already used to implement Earley's algorithm, but allows additionally the use of the operator |, so that all grammar rules that define a variable can be combined in one rule. | !cat Pure.g4 | ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb | Danghor/Formal-Languages | gpl-2.0 |
The annotated grammar is stored in the file Grammar.g4.
The parser will return a list of grammar rules, where each rule of the form
$$ a \rightarrow \beta $$
is stored as the tuple (a,) + 𝛽. | !cat -n Grammar.g4 | ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb | Danghor/Formal-Languages | gpl-2.0 |
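The tuple representation can be made concrete with a small example. The rule expr → expr '+' term below is purely illustrative and not taken from the grammar file:

```python
a = 'expr'
beta = ('expr', "'+'", 'term')   # right-hand side; the literal '+' keeps its quotes

rule = (a,) + beta               # the representation produced by the parser
head, *body = rule               # how such a tuple is later unpacked
print(rule)
# → ('expr', 'expr', "'+'", 'term')
```
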
The Class GrammarRule
The class GrammarRule is used to store a single grammar rule. As we have to use objects of type GrammarRule as keys in a dictionary later, we have to provide the methods __eq__, __ne__, and __hash__. | class GrammarRule:
def __init__(self, variable, body):
self.mVariable = variable
self.mBody = body
def __eq__(self, other):
return isinstance(other, GrammarRule) and \
self.mVariable == other.mVariable and \
self.mBody == other.mBody
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
return hash(self.__repr__())
def __repr__(self):
return f'{self.mVariable} → {" ".join(self.mBody)}' | ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb | Danghor/Formal-Languages | gpl-2.0 |
The function parse_grammar takes a string filename as its argument and returns the grammar that is stored in the specified file. The grammar is represented as list of rules. Each rule is represented as a tuple. The example below will clarify this structure. | def parse_grammar(filename):
input_stream = antlr4.FileStream(filename, encoding="utf-8")
lexer = GrammarLexer(input_stream)
token_stream = antlr4.CommonTokenStream(lexer)
parser = GrammarParser(token_stream)
grammar = parser.start()
return [GrammarRule(head, tuple(body)) for head, *body in grammar.g]
grammar = parse_grammar('Examples/c-grammar.g')
grammar | ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb | Danghor/Formal-Languages | gpl-2.0 |
Given a string name, which is either a variable, a token, or a literal, the function is_var checks whether name is a variable. The function can distinguish variable names from tokens and literals because variable names consist only of lower case letters, while tokens are all uppercase and literals start with the character "'". | def is_var(name):
return name[0] != "'" and name.islower() | ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb | Danghor/Formal-Languages | gpl-2.0 |
Fun Fact: The invocation of "'return'".islower() returns True. This is the reason that we have to test that
name does not start with a "'" character because otherwise keywords like 'return' or 'while' appearing in a grammar would be mistaken for variables. | "'return'".islower() | ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb | Danghor/Formal-Languages | gpl-2.0 |
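The behavior of is_var on the three kinds of grammar symbols can be checked directly; the function is reproduced here so the snippet is self-contained:

```python
def is_var(name):
    # variables are all lower case and do not start with a quote
    return name[0] != "'" and name.islower()

print(is_var('expr'))       # a variable
print(is_var('NUMBER'))     # a token
print(is_var("'return'"))   # a literal: islower() is True, but the quote test catches it
```
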
Given a list Rules of GrammarRules, the function collect_variables(Rules) returns the set of all variables occurring in Rules. | def collect_variables(Rules):
Variables = set()
for rule in Rules:
Variables.add(rule.mVariable)
for item in rule.mBody:
if is_var(item):
Variables.add(item)
return Variables | ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb | Danghor/Formal-Languages | gpl-2.0 |
Given a set Rules of GrammarRules, the function collect_tokens(Rules) returns the set of all tokens and literals occurring in Rules. | def collect_tokens(Rules):
Tokens = set()
for rule in Rules:
for item in rule.mBody:
if not is_var(item):
Tokens.add(item)
return Tokens | ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb | Danghor/Formal-Languages | gpl-2.0 |
Marked Rules
The class MarkedRule stores a single marked rule of the form
$$ v \rightarrow \alpha \bullet \beta $$
where the variable $v$ is stored in the member variable mVariable, while $\alpha$ and $\beta$ are stored in the variables mAlphaand mBeta respectively. These variables are assumed to contain tuples of grammar symbols. A grammar symbol is either
- a variable,
- a token, or
- a literal, i.e. a string enclosed in single quotes.
Later, we need to maintain sets of marked rules to represent states. Therefore, we have to define the methods __eq__, __ne__, and __hash__. | class MarkedRule():
def __init__(self, variable, alpha, beta):
self.mVariable = variable
self.mAlpha = alpha
self.mBeta = beta
def __eq__(self, other):
return isinstance(other, MarkedRule) and \
self.mVariable == other.mVariable and \
self.mAlpha == other.mAlpha and \
self.mBeta == other.mBeta
def __ne__(self, other):
return not self.__eq__(other)
def __hash__(self):
return hash(self.__repr__())
def __repr__(self):
alphaStr = ' '.join(self.mAlpha)
betaStr = ' '.join(self.mBeta)
return f'{self.mVariable} → {alphaStr} • {betaStr}' | ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb | Danghor/Formal-Languages | gpl-2.0 |
Given a marked rule self, the function is_complete checks whether the marked rule self has the form
$$ c \rightarrow \alpha\; \bullet,$$
i.e. it checks whether the $\bullet$ is at the end of the grammar rule. | def is_complete(self):
return len(self.mBeta) == 0
MarkedRule.is_complete = is_complete
del is_complete | ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb | Danghor/Formal-Languages | gpl-2.0 |
Given a marked rule self of the form
$$ c \rightarrow \alpha \bullet X\, \delta, $$
the function symbol_after_dot returns the symbol $X$. If there is no symbol after the $\bullet$, the method returns None. | def symbol_after_dot(self):
if len(self.mBeta) > 0:
return self.mBeta[0]
return None
MarkedRule.symbol_after_dot = symbol_after_dot
del symbol_after_dot | ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb | Danghor/Formal-Languages | gpl-2.0 |
Given a marked rule, this function returns the variable following the dot. If there is no variable following the dot, the function returns None. | def next_var(self):
if len(self.mBeta) > 0:
var = self.mBeta[0]
if is_var(var):
return var
return None
MarkedRule.next_var = next_var
del next_var | ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb | Danghor/Formal-Languages | gpl-2.0 |
The function move_dot(self) transforms a marked rule of the form
$$ c \rightarrow \alpha \bullet X\, \beta $$
into a marked rule of the form
$$ c \rightarrow \alpha\, X \bullet \beta, $$
i.e. the $\bullet$ is moved over the next symbol. Invocation of this method assumes that there is a symbol
following the $\bullet$. | def move_dot(self):
return MarkedRule(self.mVariable,
self.mAlpha + (self.mBeta[0],),
self.mBeta[1:])
MarkedRule.move_dot = move_dot
del move_dot | ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb | Danghor/Formal-Languages | gpl-2.0 |
The function to_rule(self) turns the marked rule self into a GrammarRule, i.e. the marked rule
$$ c \rightarrow \alpha \bullet \beta $$
is turned into the grammar rule
$$ c \rightarrow \alpha\, \beta. $$ | def to_rule(self):
return GrammarRule(self.mVariable, self.mAlpha + self.mBeta)
MarkedRule.to_rule = to_rule
del to_rule | ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb | Danghor/Formal-Languages | gpl-2.0 |
SLR-Table-Generation
The class Grammar represents a context free grammar. It stores a list of the GrammarRules of the given grammar.
Each grammar rule is of the form
$$ a \rightarrow \beta $$
where $\beta$ is a tuple of variables, tokens, and literals.
The start symbol is assumed to be the variable on the left hand side of the first rule. The grammar is augmented with the rule
$$ \widehat{s} \rightarrow s\, \$. $$
Here $s$ is the start variable of the given grammar and $\widehat{s}$ is a new variable that is the start variable of the augmented grammar. The symbol $ denotes the end of input. The non-obvious member variables of the class Grammar have the following interpretation
- mStates is the set of all states of the SLR-parser. These states are sets of marked rules.
- mStateNamesis a dictionary assigning names of the form s0, s1, $\cdots$, sn to the states stored in
mStates. The functions action and goto will be defined for state names, not for states, because
otherwise the table representing these functions would become both huge and unreadable.
- mConflicts is a Boolean variable that will be set to true if the table generation discovers
shift/reduce conflicts or reduce/reduce conflicts. | class Grammar():
def __init__(self, Rules):
self.mRules = Rules
self.mStart = Rules[0].mVariable
self.mVariables = collect_variables(Rules)
self.mTokens = collect_tokens(Rules)
self.mStates = set()
self.mStateNames = {}
self.mConflicts = False
self.mVariables.add('ŝ')
self.mTokens.add('$')
self.mRules.append(GrammarRule('ŝ', (self.mStart, '$'))) # augmenting
self.compute_tables() | ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb | Danghor/Formal-Languages | gpl-2.0 |
Given a set of Variables, the function initialize_dictionary returns a dictionary that assigns the empty set to all variables. | def initialize_dictionary(Variables):
return { a: set() for a in Variables } | ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb | Danghor/Formal-Languages | gpl-2.0 |
Given a Grammar, the function compute_tables computes
- the sets First(v) and Follow(v) for every variable v,
- the set of all states of the SLR-Parser,
- the action table, and
- the goto table.
Given a grammar g,
- the set g.mFirst is a dictionary such that g.mFirst[a] = First[a] and
- the set g.mFollow is a dictionary such that g.mFollow[a] = Follow[a] for all variables a. | def compute_tables(self):
self.mFirst = initialize_dictionary(self.mVariables)
self.mFollow = initialize_dictionary(self.mVariables)
self.compute_first()
self.compute_follow()
self.compute_rule_names()
self.all_states()
self.compute_action_table()
self.compute_goto_table()
Grammar.compute_tables = compute_tables
del compute_tables | ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb | Danghor/Formal-Languages | gpl-2.0 |
The function compute_rule_names assigns a unique name to each rule of the grammar. These names are used later
to represent reduce actions in the action table. | def compute_rule_names(self):
self.mRuleNames = {}
counter = 0
for rule in self.mRules:
self.mRuleNames[rule] = 'r' + str(counter)
counter += 1
Grammar.compute_rule_names = compute_rule_names
del compute_rule_names | ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb | Danghor/Formal-Languages | gpl-2.0 |
The function compute_first(self) computes the sets $\texttt{First}(c)$ for all variables $c$ and stores them in the dictionary mFirst. Abstractly, given a variable $c$ the function $\texttt{First}(c)$ is the set of all tokens that can start a string that is derived from $c$:
$$\texttt{First}(\texttt{c}) :=
\Bigl\{ t \in T \Bigm| \exists \gamma \in (V \cup T)^*: \texttt{c} \Rightarrow^* t\,\gamma \Bigr\}.
$$
The definition of the function $\texttt{First}()$ is extended to strings from $(V \cup T)^*$ as follows:
- $\texttt{FirstList}(\varepsilon) = \{\}$.
- $\texttt{FirstList}(t \beta) = \{ t \}$ if $t \in T$.
- $\texttt{FirstList}(\texttt{a} \beta) = \left\{
  \begin{array}[c]{ll}
  \texttt{First}(\texttt{a}) \cup \texttt{FirstList}(\beta) & \mbox{if $\texttt{a} \Rightarrow^* \varepsilon$;} \\
  \texttt{First}(\texttt{a})                                & \mbox{otherwise.}
  \end{array}
  \right.
  $
If $\texttt{a}$ is a variable of $G$ and the rules defining $\texttt{a}$ are given as
$$\texttt{a} \rightarrow \alpha_1 \mid \cdots \mid \alpha_n, $$
then we have
$$\texttt{First}(\texttt{a}) = \bigcup\limits_{i=1}^n \texttt{FirstList}(\alpha_i). $$
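Before looking at the implementation, the fixed-point computation of the First sets can be illustrated on a tiny standalone grammar, independent of the Grammar class. The toy grammar and helper below are illustrative only ('' plays the role of ε):

```python
# toy grammar:  s → a s | 'B',   a → 'A' | ε     (tokens: 'A', 'B')
Rules = [('s', ('a', 's')), ('s', ('B',)), ('a', ('A',)), ('a', ())]
First = {'s': set(), 'a': set()}

def first_list(alpha):
    # FirstList as defined above; '' stands for the empty string ε
    if not alpha:
        return {''}
    x, *rest = alpha
    if x not in First:                        # x is a token
        return {x}
    if '' in First[x]:                        # x is nullable
        return (First[x] - {''}) | first_list(rest)
    return First[x]

change = True
while change:                                 # fixed-point iteration
    change = False
    for head, body in Rules:
        f = first_list(body)
        if not f <= First[head]:
            First[head] |= f
            change = True
print(First)
# → {'s': {'A', 'B'}, 'a': {'A', ''}}
```
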
The dictionary mFirst that stores this function is computed via a fixed point iteration. | def compute_first(self):
change = True
while change:
change = False
for rule in self.mRules:
a, body = rule.mVariable, rule.mBody
first_body = self.first_list(body)
if not (first_body <= self.mFirst[a]):
change = True
self.mFirst[a] |= first_body
print('First sets:')
for v in self.mVariables:
print(f'First({v}) = {self.mFirst[v]}')
Grammar.compute_first = compute_first
del compute_first | ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb | Danghor/Formal-Languages | gpl-2.0 |
Given a tuple of variables and tokens alpha, the function first_list(alpha) computes the function $\texttt{FirstList}(\alpha)$ that has been defined above. If alpha is nullable, then the result will contain the empty string $\varepsilon = \texttt{''}$. | def first_list(self, alpha):
if len(alpha) == 0:
return { '' }
elif is_var(alpha[0]):
v, *r = alpha
return eps_union(self.mFirst[v], self.first_list(r))
else:
t = alpha[0]
return { t }
Grammar.first_list = first_list
del first_list | ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb | Danghor/Formal-Languages | gpl-2.0 |
The arguments S and T of eps_union are sets that contain tokens and, additionally, they might contain the empty string. | def eps_union(S, T):
if '' in S:
if '' in T:
return S | T
return (S - { '' }) | T
return S | ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb | Danghor/Formal-Languages | gpl-2.0 |
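The intended semantics of eps_union — the empty string propagates to the result only when it occurs in both arguments — can be checked on a few cases. The function is reproduced here so the snippet is self-contained ('' stands for ε):

```python
def eps_union(S, T):
    # union of First-style sets where '' (i.e. ε) is only kept
    # if both S and T are nullable
    if '' in S:
        if '' in T:
            return S | T
        return (S - {''}) | T
    return S

print(eps_union({'a', ''}, {'b'}))      # ε dropped:  {'a', 'b'}
print(eps_union({'a', ''}, {'b', ''}))  # ε kept:     {'a', 'b', ''}
print(eps_union({'a'}, {'b'}))          # S not nullable: T is ignored, {'a'}
```
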
Given an augmented grammar $G = \langle V, T, R \cup \{\widehat{s} \rightarrow s\,\$\}, \widehat{s}\rangle$
and a variable $a$, the set of tokens that might follow $a$ is defined as:
$$\texttt{Follow}(a) :=
\bigl\{ t \in \widehat{T} \,\bigm|\, \exists \beta,\gamma \in (V \cup \widehat{T})^*:
\widehat{s} \Rightarrow^* \beta \,a\, t\, \gamma
\bigr\}.
$$
The function compute_follow computes the sets $\texttt{Follow}(a)$ for all variables $a$ via a fixed-point iteration. | def compute_follow(self):
self.mFollow[self.mStart] = { '$' }
change = True
while change:
change = False
for rule in self.mRules:
a, body = rule.mVariable, rule.mBody
for i in range(len(body)):
if is_var(body[i]):
yi = body[i]
Tail = self.first_list(body[i+1:])
firstTail = eps_union(Tail, self.mFollow[a])
if not (firstTail <= self.mFollow[yi]):
change = True
self.mFollow[yi] |= firstTail
print('Follow sets (note that "$" denotes the end of file):');
for v in self.mVariables:
print(f'Follow({v}) = {self.mFollow[v]}')
Grammar.compute_follow = compute_follow
del compute_follow | ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb | Danghor/Formal-Languages | gpl-2.0 |
If $\mathcal{M}$ is a set of marked rules, then the closure of $\mathcal{M}$ is the smallest set $\mathcal{K}$ such that
we have the following:
- $\mathcal{M} \subseteq \mathcal{K}$,
- If $a \rightarrow \beta \bullet c\, \delta$ is a marked rule from
$\mathcal{K}$, and $c$ is a variable and if, furthermore,
$c \rightarrow \gamma$ is a grammar rule,
then the marked rule $c \rightarrow \bullet \gamma$
is an element of $\mathcal{K}$:
$$(a \rightarrow \beta \bullet c\, \delta) \in \mathcal{K}
\;\wedge\;
(c \rightarrow \gamma) \in R
\;\Rightarrow\; (c \rightarrow \bullet \gamma) \in \mathcal{K}
$$
We define $\texttt{closure}(\mathcal{M}) := \mathcal{K}$. The function cmp_closure computes this closure for a given set of marked rules via a fixed-point iteration. | def cmp_closure(self, Marked_Rules):
All_Rules = Marked_Rules
New_Rules = Marked_Rules
while True:
More_Rules = set()
for rule in New_Rules:
c = rule.next_var()
if c == None:
continue
for rule in self.mRules:
head, alpha = rule.mVariable, rule.mBody
if c == head:
More_Rules |= { MarkedRule(head, (), alpha) }
if More_Rules <= All_Rules:
return frozenset(All_Rules)
New_Rules = More_Rules - All_Rules
All_Rules |= New_Rules
Grammar.cmp_closure = cmp_closure
del cmp_closure | ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb | Danghor/Formal-Languages | gpl-2.0 |
Given a set of marked rules $\mathcal{M}$ and a grammar symbol $X$, the function $\texttt{goto}(\mathcal{M}, X)$
is defined as follows:
$$\texttt{goto}(\mathcal{M}, X) := \texttt{closure}\Bigl( \bigl\{
a \rightarrow \beta\, X \bullet \delta \bigm| (a \rightarrow \beta \bullet X\, \delta) \in \mathcal{M}
\bigr\} \Bigr).
$$ | def goto(self, Marked_Rules, x):
Result = set()
for mr in Marked_Rules:
if mr.symbol_after_dot() == x:
Result.add(mr.move_dot())
return self.cmp_closure(Result)
Grammar.goto = goto
del goto | ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb | Danghor/Formal-Languages | gpl-2.0 |
The function all_states computes the set of all states of an SLR-parser. The function starts with the state
$$ \texttt{closure}\bigl(\{\, \widehat{s} \rightarrow \bullet s \,\$ \,\}\bigr) $$
and then tries to compute new states by using the function goto. This computation proceeds via a
fixed-point iteration. Once all states have been computed, the function assigns names to these states.
This association is stored in the dictionary mStateNames. | def all_states(self):
start_state = self.cmp_closure({ MarkedRule('ŝ', (), (self.mStart, '$')) })
self.mStates = { start_state }
New_States = self.mStates
while True:
More_States = set()
for Rule_Set in New_States:
for mr in Rule_Set:
if not mr.is_complete():
x = mr.symbol_after_dot()
if x != '$':
More_States |= { self.goto(Rule_Set, x) }
if More_States <= self.mStates:
break
New_States = More_States - self.mStates;
self.mStates |= New_States
print("All SLR-states:")
counter = 1
self.mStateNames[start_state] = 's0'
print(f's0 = {set(start_state)}')
for state in self.mStates - { start_state }:
self.mStateNames[state] = f's{counter}'
print(f's{counter} = {set(state)}')
counter += 1
Grammar.all_states = all_states
del all_states | ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb | Danghor/Formal-Languages | gpl-2.0 |
The following function computes the action table and is defined as follows:
- If $\mathcal{M}$ contains a marked rule of the form $a \rightarrow \beta \bullet t\, \delta$
then we have
$$\texttt{action}(\mathcal{M},t) := \langle \texttt{shift}, \texttt{goto}(\mathcal{M},t) \rangle.$$
- If $\mathcal{M}$ contains a marked rule of the form $a \rightarrow \beta\, \bullet$ and we have
$t \in \texttt{Follow}(a)$, then we define
$$\texttt{action}(\mathcal{M},t) := \langle \texttt{reduce}, a \rightarrow \beta \rangle$$
- If $\mathcal{M}$ contains the marked rule $\widehat{s} \rightarrow s \bullet \$ $, then we define
$$\texttt{action}(\mathcal{M},\$) := \texttt{accept}. $$
- Otherwise, we have
$$\texttt{action}(\mathcal{M},t) := \texttt{error}. $$ | def compute_action_table(self):
self.mActionTable = {}
print('\nAction Table:')
for state in self.mStates:
stateName = self.mStateNames[state]
actionTable = {}
# compute shift actions
for token in self.mTokens:
if token != '$':
newState = self.goto(state, token)
if newState != set():
newName = self.mStateNames[newState]
actionTable[token] = ('shift', newName)
self.mActionTable[stateName, token] = ('shift', newName)
print(f'action("{stateName}", {token}) = ("shift", {newName})')
# compute reduce actions
for mr in state:
if mr.is_complete():
for token in self.mFollow[mr.mVariable]:
action1 = actionTable.get(token)
action2 = ('reduce', mr.to_rule())
if action1 is None:
actionTable[token] = action2
r = self.mRuleNames[mr.to_rule()]
self.mActionTable[stateName, token] = ('reduce', r)
print(f'action("{stateName}", {token}) = {action2}')
elif action1 != action2:
self.mConflicts = True
print('')
print(f'conflict in state {stateName}:')
print(f'{stateName} = {state}')
print(f'action("{stateName}", {token}) = {action1}')
print(f'action("{stateName}", {token}) = {action2}')
print('')
for mr in state:
if mr == MarkedRule('ŝ', (self.mStart,), ('$',)):
actionTable['$'] = 'accept'
self.mActionTable[stateName, '$'] = 'accept'
print(f'action("{stateName}", $) = accept')
Grammar.compute_action_table = compute_action_table
del compute_action_table | ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb | Danghor/Formal-Languages | gpl-2.0 |
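To see how the generated tables are meant to be consumed, here is a hedged sketch of a table-driven shift-reduce loop for the toy grammar S → 'a' S | 'b'. The tables below are written by hand purely for illustration (the state names are hypothetical, not ones the generator produced), but they use the same dictionary format that dump_parse_table emits.

```python
# Hand-written tables for the toy grammar  S -> 'a' S | 'b'
# (hypothetical state names, for illustration only)
actionTable = {
    ('s0', 'a'): ('shift', 's2'), ('s0', 'b'): ('shift', 's3'),
    ('s1', '$'): 'accept',
    ('s2', 'a'): ('shift', 's2'), ('s2', 'b'): ('shift', 's3'),
    ('s3', '$'): ('reduce', ('S', ('b',))),
    ('s4', '$'): ('reduce', ('S', ('a', 'S'))),
}
gotoTable = {('s0', 'S'): 's1', ('s2', 'S'): 's4'}

def parse(tokens):
    """Table-driven shift-reduce loop; returns True iff tokens are accepted."""
    stack, tokens = ['s0'], list(tokens) + ['$']
    while True:
        action = actionTable.get((stack[-1], tokens[0]))
        if action == 'accept':
            return True
        if action is None:           # no table entry: a syntax error
            return False
        if action[0] == 'shift':
            stack.append(action[1])
            tokens.pop(0)
        else:                        # reduce by the rule  head -> body
            head, body = action[1]
            del stack[-len(body):]   # pop one state per symbol in the body
            stack.append(gotoTable[stack[-1], head])

print(parse('aab'))  # True
print(parse('aa'))   # False
```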
The function compute_goto_table computes the goto table. | def compute_goto_table(self):
self.mGotoTable = {}
print('\nGoto Table:')
for state in self.mStates:
for var in self.mVariables:
newState = self.goto(state, var)
if newState != set():
stateName = self.mStateNames[state]
newName = self.mStateNames[newState]
self.mGotoTable[stateName, var] = newName
print(f'goto({stateName}, {var}) = {newName}')
Grammar.compute_goto_table = compute_goto_table
del compute_goto_table
%%time
g = Grammar(grammar)
def strip_quotes(t):
if t[0] == "'" and t[-1] == "'":
return t[1:-1]
return t
def dump_parse_table(self, file):
with open(file, 'w') as handle:
handle.write('# Grammar rules:\n')
for rule in self.mRules:
rule_name = self.mRuleNames[rule]
handle.write(f'{rule_name} = ("{rule.mVariable}", {rule.mBody})\n')
handle.write('\n# Action table:\n')
handle.write('actionTable = {}\n')
for s, t in self.mActionTable:
action = self.mActionTable[s, t]
t = strip_quotes(t)
if action[0] == 'reduce':
rule_name = action[1]
handle.write(f"actionTable['{s}', '{t}'] = ('reduce', {rule_name})\n")
elif action == 'accept':
handle.write(f"actionTable['{s}', '{t}'] = 'accept'\n")
else:
handle.write(f"actionTable['{s}', '{t}'] = {action}\n")
handle.write('\n# Goto table:\n')
handle.write('gotoTable = {}\n')
for s, v in self.mGotoTable:
state = self.mGotoTable[s, v]
handle.write(f"gotoTable['{s}', '{v}'] = '{state}'\n")
Grammar.dump_parse_table = dump_parse_table
del dump_parse_table
g.dump_parse_table('parse-table.py')
!cat parse-table.py
!rm GrammarLexer.* GrammarParser.* Grammar.tokens GrammarListener.py Grammar.interp
!rm -r __pycache__
!ls | ANTLR4-Python/SLR-Parser-Generator/SLR-Table-Generator.ipynb | Danghor/Formal-Languages | gpl-2.0 |
Let's start with a showcase
Case study: air quality in Europe
AirBase (The European Air quality dataBase): hourly measurements of all air quality monitoring stations from Europe
Starting from these hourly data for different stations: | data = pd.read_csv('data/airbase_data.csv', index_col=0, parse_dates=True, na_values='-9999')
data | 01 - Introduction.ipynb | btel/2015_eitn_swc_pandas | bsd-2-clause |
to answering questions about this data in a few lines of code:
Does the air pollution show a decreasing trend over the years? | data['1999':].resample('A').plot(ylim=[0,100]) | 01 - Introduction.ipynb | btel/2015_eitn_swc_pandas | bsd-2-clause |
How many exceedances of the limit values? | exceedances = data > 200
exceedances = exceedances.groupby(exceedances.index.year).sum()
ax = exceedances.loc[2005:].plot(kind='bar')
ax.axhline(18, color='k', linestyle='--') | 01 - Introduction.ipynb | btel/2015_eitn_swc_pandas | bsd-2-clause |
What is the difference in diurnal profile between weekdays and weekend? | data['weekday'] = data.index.weekday
data['weekend'] = data['weekday'].isin([5, 6])
data_weekend = data.groupby(['weekend', data.index.hour])['FR04012'].mean().unstack(level=0)
data_weekend.plot() | 01 - Introduction.ipynb | btel/2015_eitn_swc_pandas | bsd-2-clause |
Adding Polynomials
When you add two polynomials, the result is a polynomial. Here's an example:
\begin{equation}(3x^{3} - 4x + 5) + (2x^{3} + 3x^{2} - 2x + 2) \end{equation}
Because this is an addition operation, you can simply add all of the like terms from both polynomials. To make this clear, let's first put the like terms together:
\begin{equation}3x^{3} + 2x^{3} + 3x^{2} - 4x -2x + 5 + 2 \end{equation}
This simplifies to:
\begin{equation}5x^{3} + 3x^{2} - 6x + 7 \end{equation}
We can verify this with Python: | from random import randint
x = randint(1,100)
(3*x**3 - 4*x + 5) + (2*x**3 + 3*x**2 - 2*x + 2) == 5*x**3 + 3*x**2 - 6*x + 7 | courses/DAT256x/Module01/01-05-Polynomials.ipynb | alexandrnikitin/algorithm-sandbox | mit |
Subtracting Polynomials
Subtracting polynomials is similar to adding them but you need to take into account that one of the polynomials is a negative.
Consider this expression:
\begin{equation}(2x^{2} - 4x + 5) - (x^{2} - 2x + 2) \end{equation}
The key to performing this calculation is to realize that the subtraction of the second polynomial is really an expression that adds -1(x<sup>2</sup> - 2x + 2); so you can use the distributive property to multiply each of the terms in the polynomial by -1 (which in effect simply reverses the sign for each term). So our expression becomes:
\begin{equation}(2x^{2} - 4x + 5) + (-x^{2} + 2x - 2) \end{equation}
Which we can solve as an addition problem. First place the like terms together:
\begin{equation}2x^{2} + -x^{2} + -4x + 2x + 5 + -2 \end{equation}
Which simplifies to:
\begin{equation}x^{2} - 2x + 3 \end{equation}
Let's check that with Python: | from random import randint
x = randint(1,100)
(2*x**2 - 4*x + 5) - (x**2 - 2*x + 2) == x**2 - 2*x + 3 | courses/DAT256x/Module01/01-05-Polynomials.ipynb | alexandrnikitin/algorithm-sandbox | mit |
Multiplying Polynomials
To multiply two polynomials, you need to perform the following two steps:
1. Multiply each term in the first polynomial by each term in the second polynomial.
2. Add the results of the multiplication operations, combining like terms where possible.
For example, consider this expression:
\begin{equation}(x^{4} + 2)(2x^{2} + 3x - 3) \end{equation}
Let's do the first step and multiply each term in the first polynomial by each term in the second polynomial. The first term in the first polynomial is x<sup>4</sup>, and the first term in the second polynomial is 2x<sup>2</sup>, so multiplying these gives us 2x<sup>6</sup>. Then we can multiply the first term in the first polynomial (x<sup>4</sup>) by the second term in the second polynomial (3x), which gives us 3x<sup>5</sup>, and so on until we've multiplied all of the terms in the first polynomial by all of the terms in the second polynomial, which results in this:
\begin{equation}2x^{6} + 3x^{5} - 3x^{4} + 4x^{2} + 6x - 6 \end{equation}
We can verify a match between this result and the original expression with the following Python code:
x = randint(1,100)
(x**4 + 2)*(2*x**2 + 3*x - 3) == 2*x**6 + 3*x**5 - 3*x**4 + 4*x**2 + 6*x - 6 | courses/DAT256x/Module01/01-05-Polynomials.ipynb | alexandrnikitin/algorithm-sandbox | mit |
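Multiplying polynomials term by term is the same as convolving their coefficient lists; a small sketch (coefficients ordered from the constant term upward, and the helper is ours, purely for illustration):

```python
def poly_multiply(a, b):
    """Multiply polynomials given as coefficient lists, lowest degree first."""
    result = [0] * (len(a) + len(b) - 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            result[i + j] += ca * cb  # x^i * x^j contributes to the x^(i+j) term
    return result

# (x^4 + 2) * (2x^2 + 3x - 3), coefficients from the constant term upward
print(poly_multiply([2, 0, 0, 0, 1], [-3, 3, 2]))
# [-6, 6, 4, 0, -3, 3, 2]  ->  2x^6 + 3x^5 - 3x^4 + 4x^2 + 6x - 6
```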
Dividing Polynomials
When you need to divide one polynomial by another, there are two approaches you can take depending on the number of terms in the divisor (the expression you're dividing by).
Dividing Polynomials Using Simplification
In the simplest case, division of a polynomial by a monomial, the operation is really just simplification of a fraction.
For example, consider the following expression:
\begin{equation}(4x + 6x^{2}) \div 2x \end{equation}
This can also be written as:
\begin{equation}\frac{4x + 6x^{2}}{2x} \end{equation}
One approach to simplifying this fraction is to split it into a separate fraction for each term in the dividend (the expression we're dividing), like this:
\begin{equation}\frac{4x}{2x} + \frac{6x^{2}}{2x}\end{equation}
Then we can simplify each fraction and add the results. For the first fraction, 2x goes into 4x twice, so the fraction simplifies to 2; and for the second, 6x<sup>2</sup> is 2x multiplied by 3x. So our answer is 2 + 3x:
\begin{equation}2 + 3x\end{equation}
Let's use Python to compare the original fraction with the simplified result for an arbitrary value of x: | from random import randint
x = randint(1,100)
(4*x + 6*x**2) / (2*x) == 2 + 3*x | courses/DAT256x/Module01/01-05-Polynomials.ipynb | alexandrnikitin/algorithm-sandbox | mit |
Dividing Polynomials Using Long Division
Things get a little more complicated for divisors with more than one term.
Suppose we have the following expression:
\begin{equation}(x^{2} + 2x - 3) \div (x - 2) \end{equation}
Another way of writing this is to use the long-division format, like this:
\begin{equation} x - 2 |\overline{x^{2} + 2x - 3} \end{equation}
We begin long-division by dividing the highest order divisor into the highest order dividend - so in this case we divide x into x<sup>2</sup>. X goes into x<sup>2</sup> x times, so we put an x on top and then multiply it through the divisor:
\begin{equation} \;\;\;\;x \end{equation}
\begin{equation}x - 2 |\overline{x^{2} + 2x - 3} \end{equation}
\begin{equation} \;x^{2} -2x \end{equation}
Now we'll subtract the remaining dividend, and then carry down the -3 that we haven't used to see what's left:
\begin{equation} \;\;\;\;x \end{equation}
\begin{equation}x - 2 |\overline{x^{2} + 2x - 3} \end{equation}
\begin{equation}- (x^{2} -2x) \end{equation}
\begin{equation}\;\;\;\;\;\overline{\;\;\;\;\;\;\;\;\;\;4x -3} \end{equation}
OK, now we'll divide our highest order divisor into the highest order of the remaining dividend. In this case, x goes into 4x four times, so we'll add a 4 to the top line, multiply it through the divisor, and subtract the remaining dividend:
\begin{equation} \;\;\;\;\;\;\;\;x + 4 \end{equation}
\begin{equation}x - 2 |\overline{x^{2} + 2x - 3} \end{equation}
\begin{equation}- (x^{2} -2x) \end{equation}
\begin{equation}\;\;\;\;\;\overline{\;\;\;\;\;\;\;\;\;\;4x -3} \end{equation}
\begin{equation}- (\;\;\;\;\;\;\;\;\;\;\;\;4x -8) \end{equation}
\begin{equation}\;\;\;\;\;\overline{\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;5} \end{equation}
We're now left with just 5, which we can't divide further by x - 2; so that's our remainder, which we'll add as a fraction.
The solution to our division problem is:
\begin{equation}x + 4 + \frac{5}{x-2} \end{equation}
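The long-division procedure can also be automated; here is a minimal sketch that works on coefficient lists (highest degree first) and reproduces the quotient and remainder above:

```python
def poly_long_division(num, den):
    """Divide polynomials given as coefficient lists, highest degree first.
    Returns (quotient, remainder) as coefficient lists."""
    num = list(num)
    quotient = []
    while len(num) >= len(den):
        factor = num[0] / den[0]          # next coefficient of the quotient
        quotient.append(factor)
        for i, coeff in enumerate(den):   # subtract factor * divisor
            num[i] -= factor * coeff
        num.pop(0)                        # the leading coefficient is now zero
    return quotient, num

# (x^2 + 2x - 3) / (x - 2)  ->  quotient x + 4, remainder 5
print(poly_long_division([1, 2, -3], [1, -2]))  # ([1.0, 4.0], [5.0])
```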
Once again, we can use Python to check our answer: | from random import randint
x = randint(3,100)
(x**2 + 2*x -3)/(x-2) == x + 4 + (5/(x-2))
| courses/DAT256x/Module01/01-05-Polynomials.ipynb | alexandrnikitin/algorithm-sandbox | mit |
ConvNet Codes
Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.
Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):
```
self.conv1_1 = self.conv_layer(bgr, "conv1_1")
self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2")
self.pool1 = self.max_pool(self.conv1_2, 'pool1')
self.conv2_1 = self.conv_layer(self.pool1, "conv2_1")
self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2")
self.pool2 = self.max_pool(self.conv2_2, 'pool2')
self.conv3_1 = self.conv_layer(self.pool2, "conv3_1")
self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2")
self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3")
self.pool3 = self.max_pool(self.conv3_3, 'pool3')
self.conv4_1 = self.conv_layer(self.pool3, "conv4_1")
self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2")
self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3")
self.pool4 = self.max_pool(self.conv4_3, 'pool4')
self.conv5_1 = self.conv_layer(self.pool4, "conv5_1")
self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2")
self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3")
self.pool5 = self.max_pool(self.conv5_3, 'pool5')
self.fc6 = self.fc_layer(self.pool5, "fc6")
self.relu6 = tf.nn.relu(self.fc6)
```
So what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
This creates the vgg object, then builds the graph with vgg.build(input_). Then to get the values from the layer,
feed_dict = {input_: images}
codes = sess.run(vgg.relu6, feed_dict=feed_dict) | !pip install scikit-image
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)] | transfer-learning/Transfer_Learning.ipynb | Lstyle1/Deep_learning_projects | mit |
Below I'm running images through the VGG network in batches.
Exercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values). | # Set the batch size higher if you can fit in in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
# TODO: Build the vgg network here
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
# Image batch to pass to VGG network
images = np.concatenate(batch)
# TODO: Get the values from the relu6 layer of the VGG network
feed_dict = {input_: images}
codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'w') as f:
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
#Test
print(labels[:10])
print(codes[:10])
print(codes.shape) | transfer-learning/Transfer_Learning.ipynb | Lstyle1/Deep_learning_projects | mit |
Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work. | # read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes') as f:
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
#Test
print(labels[:10])
print(codes[:10])
print(codes.shape) | transfer-learning/Transfer_Learning.ipynb | Lstyle1/Deep_learning_projects | mit |
Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
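One-hot encoding itself is simple to do by hand; here is a minimal NumPy sketch, independent of scikit-learn (the one_hot helper is ours, not part of any library):

```python
import numpy as np

def one_hot(labels, classes):
    """Encode each label as a vector with a single 1 at its class index."""
    class_to_index = {c: i for i, c in enumerate(classes)}
    encoded = np.zeros((len(labels), len(classes)), dtype=int)
    for row, label in enumerate(labels):
        encoded[row, class_to_index[label]] = 1
    return encoded

print(one_hot(['roses', 'daisy'], ['daisy', 'roses', 'tulips']))
# [[0 1 0]
#  [1 0 0]]
```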
Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels. | from sklearn import preprocessing
label_binarizer = preprocessing.LabelBinarizer()
label_binarizer.fit(classes)
labels_vecs = label_binarizer.transform(labels) # Your one-hot encoded labels array here
#Test
label_binarizer.classes_
print(labels_vecs[:5]) | transfer-learning/Transfer_Learning.ipynb | Lstyle1/Deep_learning_projects | mit |
Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both of these goals is to use StratifiedShuffleSplit from scikit-learn.
You can create the splitter like so:
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
Then split the data with
splitter = ss.split(x, y)
ss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide.
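The generator-plus-next pattern is plain Python; a toy illustration with a hypothetical stand-in splitter (not scikit-learn's):

```python
def make_splits(n_items, n_splits):
    """Toy stand-in for ss.split: yields (train, test) index lists."""
    for i in range(n_splits):
        test = [i]
        train = [j for j in range(n_items) if j != i]
        yield train, test

splitter = make_splits(4, 2)
train_idx, test_idx = next(splitter)  # take just the first split
print(train_idx, test_idx)            # [1, 2, 3] [0]
```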
Exercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets. | from sklearn.model_selection import StratifiedShuffleSplit
# stratified splitters: first train vs. (test + valid), then split that half into test and valid
shuffle_train_test = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
shuffle_test_valid = StratifiedShuffleSplit(n_splits=1, test_size=0.5)
train_index, test_valid_index = next(shuffle_train_test.split(codes, labels_vecs))
test_index, valid_index = next(shuffle_test_valid.split(codes[test_valid_index], labels_vecs[test_valid_index]))
# the second split returns positions *within* the test/valid subset,
# so map them back to indices into the full arrays
test_index = test_valid_index[test_index]
valid_index = test_valid_index[valid_index]
train_x, train_y = codes[train_index], labels_vecs[train_index]
val_x, val_y = codes[valid_index], labels_vecs[valid_index]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape) | transfer-learning/Transfer_Learning.ipynb | Lstyle1/Deep_learning_projects | mit |
If you did it right, you should see these sizes for the training sets:
Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)
Classifier layers
Once you have the convolutional codes, you just need to build a classifier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.
Exercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs, each of which is a 4096-dimensional vector. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost. | inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
# TODO: Classifier layers and operations
fully_layer = tf.contrib.layers.fully_connected(inputs=inputs_,\
num_outputs=256,\
weights_initializer=tf.truncated_normal_initializer(stddev=0.1))
logits = tf.contrib.layers.fully_connected(inputs=fully_layer,\
num_outputs=len(classes),\
activation_fn=None,\
weights_initializer=tf.truncated_normal_initializer(stddev=0.1))
# output layer logits
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=labels_, logits=logits)) # cross entropy loss
optimizer = tf.train.AdamOptimizer().minimize(cost) # training optimizer
# Operations for validation/test accuracy
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) | transfer-learning/Transfer_Learning.ipynb | Lstyle1/Deep_learning_projects | mit |
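Independent of TensorFlow, the softmax and cross-entropy pieces the exercise asks for can be sketched in plain NumPy (these helpers are illustrative, not the notebook's solution):

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax; subtracting the row max keeps exp() numerically stable."""
    shifted = logits - logits.max(axis=1, keepdims=True)
    exps = np.exp(shifted)
    return exps / exps.sum(axis=1, keepdims=True)

def cross_entropy(probs, one_hot_labels):
    """Mean cross-entropy between predicted probabilities and one-hot labels."""
    return -np.mean(np.sum(one_hot_labels * np.log(probs), axis=1))

logits = np.array([[2.0, 1.0, 0.1]])
probs = softmax(logits)
print(probs.sum(axis=1))  # each row sums to 1
print(cross_entropy(probs, np.array([[1, 0, 0]])))
```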
Training
Here, we'll train the network.
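The get_batches helper mentioned in the exercise below is defined earlier in the full notebook and not shown in this excerpt; a minimal version consistent with how it is used here might look like this (an assumption, not the original):

```python
def get_batches(x, y, n_batches=10):
    """Yield x, y in n_batches chunks; the last batch absorbs the remainder."""
    batch_size = len(x) // n_batches
    for ii in range(0, n_batches * batch_size, batch_size):
        if ii != (n_batches - 1) * batch_size:
            X, Y = x[ii: ii + batch_size], y[ii: ii + batch_size]
        else:
            X, Y = x[ii:], y[ii:]
        yield X, Y
```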
Exercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to get your batches like for x, y in get_batches(train_x, train_y). Or write your own! | epochs = 5
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# TODO: Your training code here
for epoch in range(epochs):
for x, y in get_batches(train_x, train_y):
loss, _ = sess.run([cost,optimizer], feed_dict={inputs_: x, labels_: y})
#if epoch % 5 == 0:
val_accuracy = sess.run(accuracy, feed_dict={inputs_: val_x, labels_: val_y})
print("Epoch: {:>3}, Training Loss: {:.5f}, Validation Accuracy: {:.4f}".format(epoch+1, loss, val_accuracy))
saver.save(sess, "checkpoints/flowers.ckpt") | transfer-learning/Transfer_Learning.ipynb | Lstyle1/Deep_learning_projects | mit |
Below, feel free to choose images and see how the trained classifier predicts the flowers in them. | test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
print('"vgg" object already exists. Will not create again.')
else:
#create vgg
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), label_binarizer.classes_) | transfer-learning/Transfer_Learning.ipynb | Lstyle1/Deep_learning_projects | mit |
Kullback-Leibler Divergence
In this post we're going to take a look at a way of comparing two probability distributions called Kullback-Leibler Divergence (a.k.a. KL divergence). Very often in machine learning, we'll replace observed data or a complex distribution with a simpler, approximating distribution. KL Divergence helps us to measure just how much information we lose when we choose an approximation, and so we can even use it as our objective function to pick which approximation would work best for the problem at hand.
Let's look at an example: (The example here is borrowed from the following link. Blog: Kullback-Leibler Divergence Explained).
Suppose we're a group of scientists visiting space and we discovered some space worms. These space worms have varying number of teeth. After a decent amount of collecting, we have come to this empirical probability distribution of the number of teeth in each worm: | # ensure the probability adds up to 1
true_data = np.array([0.02, 0.03, 0.05, 0.14, 0.16, 0.15, 0.12, 0.08, 0.1, 0.08, 0.07])
n = true_data.shape[0]
index = np.arange(n)
assert sum(true_data) == 1.0
# change default style figure and font size
plt.rcParams['figure.figsize'] = 8, 6
plt.rcParams['font.size'] = 12
plt.bar(index, true_data)
plt.xlabel('Teeth Number')
plt.title('Probability Distribution of Space Worm Teeth')
plt.ylabel('Probability')
plt.xticks(index)
plt.show() | model_selection/kl_divergence.ipynb | ethen8181/machine-learning | mit |
Now we need to send this information back to earth. But the problem is that sending information from space to earth is expensive. So we wish to represent this information with a minimum amount of information, perhaps just one or two parameters. One option to represent the distribution of teeth in worms is a uniform distribution. | uniform_data = np.full(n, 1.0 / n)
# we can plot our approximated distribution against the original distribution
width = 0.3
plt.bar(index, true_data, width=width, label='True')
plt.bar(index + width, uniform_data, width=width, label='Uniform')
plt.xlabel('Teeth Number')
plt.title('Probability Distribution of Space Worm Teeth')
plt.ylabel('Probability')
plt.xticks(index)
plt.legend()
plt.show() | model_selection/kl_divergence.ipynb | ethen8181/machine-learning | mit |
Another option is to use a binomial distribution. | # we estimate the parameter of the binomial distribution
p = true_data.dot(index) / n
print('p for binomial distribution:', p)
binom_data = binom.pmf(index, n, p)
binom_data
width = 0.3
plt.bar(index, true_data, width=width, label='True')
plt.bar(index + width, binom_data, width=width, label='Binomial')
plt.xlabel('Teeth Number')
plt.title('Probability Distribution of Space Worm Teeth')
plt.ylabel('Probability')
plt.xticks(np.arange(n))
plt.legend()
plt.show() | model_selection/kl_divergence.ipynb | ethen8181/machine-learning | mit |
Comparing each of our models with our original data we can see that neither one is the perfect match, but the question now becomes, which one is better? | plt.bar(index - width, true_data, width=width, label='True')
plt.bar(index, uniform_data, width=width, label='Uniform')
plt.bar(index + width, binom_data, width=width, label='Binomial')
plt.xlabel('Teeth Number')
plt.title('Probability Distribution of Space Worm Teeth Number')
plt.ylabel('Probability')
plt.xticks(index)
plt.legend()
plt.show() | model_selection/kl_divergence.ipynb | ethen8181/machine-learning | mit |
Given these two distributions that we are using to approximate the original distribution, we need a quantitative way to measure which one does the job better. This is where Kullback-Leibler (KL) Divergence comes in.
KL Divergence has its origins in information theory. The primary goal of information theory is to quantify how much information is in our data. To recap, one of the most important metrics in information theory is called Entropy, which we will denote as $H$. The entropy for a probability distribution is defined as:
\begin{align}
H = -\sum_{i=1}^N p(x_i) \cdot \log p(x_i)
\end{align}
If we use $log_2$ for our calculation we can interpret entropy as, using a distribution $p$, the minimum number of bits it would take us to encode events drawn from distribution $p$. Knowing we have a way to quantify how much information is in our data, we now extend it to quantify how much information is lost when we substitute our observed distribution for a parameterized approximation.
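As a quick sanity check of the entropy formula, here is a direct computation using base-2 logarithms (a fair coin should cost exactly one bit per flip):

```python
import math

def entropy(p, base=2.0):
    """H(p) = -sum p(x) * log p(x); base 2 measures in bits."""
    return -sum(x * math.log(x, base) for x in p if x > 0)

print(entropy([0.5, 0.5]))                # a fair coin: 1.0 bit
print(entropy([0.25, 0.25, 0.25, 0.25]))  # four equally likely outcomes: 2.0 bits
```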
The formula for Kullback-Leibler Divergence is a slight modification of entropy. Rather than just having our probability distribution $p$ we add in our approximating distribution $q$, then we look at the difference of the log values for each:
\begin{align}
D_{KL}(p || q) = \sum_{i=1}^{N} p(x_i)\cdot (\log p(x_i) - \log q(x_i))
\end{align}
Essentially, what we're looking at with KL divergence is the expectation of the log difference between the probability of data in the original distribution and the approximating distribution. Because we're multiplying the difference between the two distributions by $p(x_i)$, matching areas where the original distribution has a higher probability matters more than matching areas that have a lower probability. Again, if we think in terms of $\log_2$, we can interpret this as how many extra bits of information we need to encode events drawn from the true distribution $p$ if we use an optimal code from distribution $q$ rather than $p$.
The more common way to see KL divergence written is as follows:
\begin{align}
D_{KL}(p || q) = \sum_{i=1}^N p(x_i) \cdot \log \frac{p(x_i)}{q(x_i)}
\end{align}
since $\text{log}a - \text{log}b = \text{log}\frac{a}{b}$.
If two distributions, $p$ and $q$ perfectly match, $D_{KL}(p || q) = 0$, otherwise the lower the KL divergence value, the better we have matched the true distribution with our approximation.
Side Note: If you're interested in having an understanding of the relationship between entropy, cross entropy and KL divergence, the following links are good places to start. Maybe they will clear up some of the hand-wavy explanation of these concepts ... Youtube: A Short Introduction to Entropy, Cross-Entropy and KL-Divergence and StackExchange: Why do we use Kullback-Leibler divergence rather than cross entropy in the t-SNE objective function?
Given this information, we can go ahead and calculate the KL divergence for our two approximating distributions. | # both functions are equivalent ways of computing KL-divergence
# one uses for loop and the other uses vectorization
def compute_kl_divergence(p_probs, q_probs):
""""KL (p || q)"""
kl_div = 0.0
for p, q in zip(p_probs, q_probs):
kl_div += p * np.log(p / q)
return kl_div
def compute_kl_divergence(p_probs, q_probs):
""""KL (p || q)"""
kl_div = p_probs * np.log(p_probs / q_probs)
return np.sum(kl_div)
print('KL(True||Uniform): ', compute_kl_divergence(true_data, uniform_data))
print('KL(True||Binomial): ', compute_kl_divergence(true_data, binom_data)) | model_selection/kl_divergence.ipynb | ethen8181/machine-learning | mit |
Array views and slicing
A NumPy array is an object of numpy.ndarray type: | a = np.arange(3)
type(a) | taking_numpy_in_stride/Taking NumPy In Stride - Student Version.ipynb | jaimefrio/pydatabcn2017 | unlicense |
All ndarrays have a .base attribute.
If this attribute is not None, then the array is a view of some other object's memory, typically another ndarray.
This is a very powerful tool, because allocating memory and copying memory contents are expensive operations, but updating metadata on how to interpret some already allocated memory is cheap!
The simplest way of creating an array's view is by slicing it: | a = np.arange(3)
a.base is None
a[:].base is None | taking_numpy_in_stride/Taking NumPy In Stride - Student Version.ipynb | jaimefrio/pydatabcn2017 | unlicense |
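A quick way to convince yourself that a slice really shares memory with its parent: writing through the view changes the original array.

```python
import numpy as np

a = np.arange(5)
view = a[1:4]          # a view: no data is copied
view[0] = 100          # writing through the view...
print(a)               # ...changes the original: [  0 100   2   3   4]
print(view.base is a)  # True
```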
Let's look more closely at what an array's metadata looks like. NumPy provides the np.info function, which can list for us some low level attributes of an array: | np.info(a) | taking_numpy_in_stride/Taking NumPy In Stride - Student Version.ipynb | jaimefrio/pydatabcn2017 | unlicense |
By the end of the workshop you will understand what most of these mean.
But rather than sit through a lesson, you get to try and figure out what they mean yourself.
To help you with that, here's a function that prints the information from two arrays side by side: | def info_for_two(one_array, another_array):
"""Prints side-by-side results of running np.info on its inputs."""
def info_as_ordered_dict(array):
"""Converts return of np.infor into an ordered dict."""
import collections
import io
buffer = io.StringIO()
np.info(array, output=buffer)
data = (
item.split(':') for item in buffer.getvalue().strip().split('\n'))
return collections.OrderedDict(
((key, value.strip()) for key, value in data))
one_dict = info_as_ordered_dict(one_array)
another_dict = info_as_ordered_dict(another_array)
name_w = max(len(name) for name in one_dict.keys())
one_w = max(len(name) for name in one_dict.values())
another_w = max(len(name) for name in another_dict.values())
output = (
f'{name:<{name_w}} : {one:>{one_w}} : {another:>{another_w}}'
for name, one, another in zip(
one_dict.keys(), one_dict.values(), another_dict.values()))
print('\n'.join(output)) | taking_numpy_in_stride/Taking NumPy In Stride - Student Version.ipynb | jaimefrio/pydatabcn2017 | unlicense |
Exercise 1
Create a one dimensional NumPy array with a few items (consider using np.arange).
Compare the printout of np.info on your array and on slices of it (use the [start:stop:step] indexing syntax, and make sure to try steps other than one).
Do you see any patterns? | # Your code goes here
Exercise 1 debrief
Every array has an underlying block of memory assigned to it.
When we slice an array, rather than making a copy of it, NumPy makes a view, reusing the memory block, but interpreting it differently.
Lets take a look at what NumPy did for us in the above examples, and make sense of some of the changes to info.
shape: for a one-dimensional array, shape is a single-item tuple equal to the total number of items in the array. You can get the shape of an array as its .shape attribute.
strides: also a single-item tuple for one-dimensional arrays, its value being the number of bytes to skip in memory to get to the next item. And yes, strides can be negative. You can get this as the .strides attribute of any array.
data pointer: this is the address in memory of the first byte of the first item of the array. Note that this doesn't have to be the same as the first byte of the underlying memory block! You rarely need to know the exact address of the data pointer, but it's part of the string representation of the array's .data attribute.
itemsize: this isn't properly an attribute of the array but of its data type. It is the number of bytes that an array item takes up in memory. You can get this value from an array as the .itemsize attribute of its .dtype attribute, i.e. array.dtype.itemsize.
type: this lets us know how each array item should be interpreted, e.g. for calculations. We'll talk more about this later, but you can get an array's type object through its .dtype attribute.
contiguous: this is one of several boolean flags of an array. Its meaning is a little more specific, but for now let's say it tells us whether the array items use the memory block efficiently, without leaving unused spaces between items. Its value can be checked as the .contiguous attribute of the array's .flags attribute.
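A quick sketch of these attributes in action. The dtype is pinned to np.int64 here so the stride values are predictable (8 bytes per item) regardless of platform defaults:

```python
import numpy as np

a = np.arange(4, dtype=np.int64)
print(a.shape)            # (4,)
print(a.strides)          # (8,) -- one 8-byte item to the next
print(a.dtype.itemsize)   # 8
print(a[::-1].strides)    # (-8,) -- a reversed view walks backwards
print(a[::2].flags['C_CONTIGUOUS'])  # False: items are 16 bytes apart
```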
Exercise 2
Take a couple of minutes to familiarize yourself with the NumPy array's attributes discussed above:
Create a small one dimensional array of your choosing.
Look at its .shape, .strides, .dtype, .flags and .data attributes.
For .dtype and .flags, store them into a separate variable, and use tab completion on those to explore their subattributes. | # Your code goes here
A look at data types
Similarly to how we can change the shape, strides, and data pointer of an array through slicing, we can change how its items are interpreted by changing its data type.
This is done by calling the array's .view() method and passing it the new data type.
But before we go there, let's look a little closer at dtypes. You are hopefully familiar with the basic NumPy numerical data types:
| Type Family | NumPy Defined Types | Character Codes |
| :---: | :---: | :---: |
| boolean | np.bool_ | '?' |
| unsigned integers | np.uint8 - np.uint64 | 'u1', 'u2', 'u4', 'u8' |
| signed integers | np.int8 - np.int64 | 'i1', 'i2', 'i4', 'i8' |
| floating point | np.float16 - np.float128 | 'f2', 'f4', 'f8', 'f16' |
| complex | np.complex64, np.complex128 | 'c8', 'c16' |
You can create a new data type by calling its constructor, np.dtype(), with either a NumPy defined type or the character code.
Character codes can have '<' or '>' prepended to indicate whether the type is little or big endian. If unspecified, native byte order is used, which for all practical purposes is going to be little endian.
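A small sketch of these spellings, plus the effect of byte order. The array below is explicitly stored little-endian ('<u2'), so reading the same bytes as big-endian gives deterministic, byte-swapped values on any machine:

```python
import numpy as np

# The same 2-byte unsigned integer dtype, spelled two ways.
print(np.dtype('u2') == np.dtype(np.uint16))  # True

# Values 0, 1, 2 stored little-endian: bytes 00 00, 01 00, 02 00.
a = np.arange(3, dtype='<u2')
# Read big-endian, the same bytes become 0x0000, 0x0100, 0x0200.
print(a.view('>u2'))  # [  0 256 512]
```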
Exercise 3
Let's play a little with dtype views:
Create a simple array of a type you feel comfortable you understand, e.g. np.arange(4, dtype=np.uint16).
Take a view of type np.uint8 of your array. This will give you the raw byte contents of your array. Is this what you were expecting?
Take a few views of your array, with dtypes of larger itemsize, or changing the endianness of the data type. Try to predict what the output will be before running the examples.
Take a look at the Wikipedia page on single-precision floating point numbers, more specifically its examples of encodings. Create arrays of four np.uint8 values which, when viewed as np.float32, give the values 1, -2, and 1/3. | # Your code goes here
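One worked hint for the last bullet (the other two values are left for you). Single-precision 1.0 encodes as 0x3f800000; stored least-significant byte first, its four bytes are 0, 0, 128, 63. The explicit '<f4' dtype pins little-endian interpretation so the result is the same on any machine:

```python
import numpy as np

# The four little-endian bytes of float32 1.0 (0x3f800000).
raw = np.array([0, 0, 128, 63], dtype=np.uint8)
print(raw.view('<f4'))  # [1.]
```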
The Constructor They Don't Want You To Know About.
You typically construct your NumPy arrays using one of the many factory functions provided, np.array() being the most popular.
But it is also possible to call the np.ndarray object constructor directly.
You will typically not want to do this, because there are probably simpler alternatives.
But it is a great way of putting your understanding of views of arrays to the test!
You can check the full documentation, but the np.ndarray constructor takes the following arguments that we care about:
shape: the shape of the returned array,
dtype: the data type of the returned array,
buffer: an object to reuse the underlying memory from, e.g. an existing array or its .data attribute,
offset: by how many bytes to move the starting data pointer of the returned array relative to the passed buffer,
strides: the strides of the returned array.
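To make the arguments concrete, here is a sketch of calling np.ndarray directly (not the exercise's reversed view, which is left for you): a hand-built view of every second item of an array, starting at its second item.

```python
import numpy as np

a = np.arange(6, dtype=np.int64)   # itemsize is 8 bytes
view = np.ndarray(shape=(3,),
                  dtype=a.dtype,
                  buffer=a,                  # reuse a's memory block
                  offset=a.itemsize,         # skip the first item
                  strides=(2 * a.itemsize,)) # then take every second one
print(view)  # [1 3 5], the same result as a[1::2]
```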
Exercise 4
Write a function, using the np.ndarray constructor, that takes a one dimensional array and returns a reversed view of it. | # Your code goes here
Reshaping Into Higher Dimensions
So far we have stuck to one-dimensional arrays. Things get substantially more interesting when we move into higher dimensions.
One way of getting views with a different number of dimensions is by using the .reshape() method of NumPy arrays, or the equivalent np.reshape() function.
The first argument to any of the reshape functions is the new shape of the array. When providing it, keep in mind:
the total size of the array must stay unchanged, i.e. the product of the values of the new shape tuple must be equal to the product of the values of the old shape tuple.
by entering -1 for one of the new dimensions, you can have NumPy compute its value for you, but the sizes of the other dimensions must then divide the total size evenly, so that the computed dimension is an integer.
.reshape() can also take an order= kwarg, which can be set to 'C' (as in the C programming language) or 'F' (for the Fortran programming language). These correspond to row-major and column-major order, respectively.
Exercise 5
Let's look at how multidimensional arrays are represented in NumPy with an exercise.
Create a small linear array whose total length is the product of two different small primes, e.g. 6 = 2 * 3.
Reshape the array into a two dimensional one, starting with the default order='C'. Try both possible combinations of rows and columns, e.g. (2, 3) and (3, 2). Look at the resulting arrays, and compare their metadata. Do you understand what's going on?
Try the same reshaping with order='F'. Can you see what the differences are?
If you feel confident with these, give a higher dimensional array a try. | # Your code goes here
Exercise 5 debrief
As the examples show, an n-dimensional array has an n-item tuple for both .shape and .strides. The number of dimensions can be queried directly from the .ndim attribute.
The shape tells us how large the array is along each dimension; the strides tell us how many bytes to skip in memory to get to the next item along each dimension.
When we reshape an array using C order, a.k.a. row-major order, items that differ only along the last dimension end up adjacent in memory. When we use Fortran order, a.k.a. column-major order, it is items that differ only along the first dimension that end up adjacent.
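A sketch of the same six items reshaped both ways; only the metadata differs, not the underlying buffer. The dtype is pinned to np.int64 (8-byte items) so the stride values are predictable:

```python
import numpy as np

a = np.arange(6, dtype=np.int64)
c = a.reshape(2, 3)             # row major
f = a.reshape(2, 3, order='F')  # column major
print(c.strides)  # (24, 8): the next row is 3 items (24 bytes) away
print(f.strides)  # (8, 16): the next row is 1 item (8 bytes) away
print(f)          # [[0 2 4], [1 3 5]]: a's items read column by column
```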
Reshaping with a purpose
One typical use of reshaping is to apply some aggregation function to equal subdivisions of an array.
Say you have a 12-item 1D array and would like to compute the sum of every three items. This is how that is typically accomplished: | a = np.arange(12, dtype=float)
a
a.reshape(4, 3).sum(axis=-1)
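The same trick combines nicely with the -1 shorthand from above: let NumPy compute the number of groups, and swap in any aggregation you like. A small sketch:

```python
import numpy as np

a = np.arange(12, dtype=float)
# Sum every 3 items; -1 lets NumPy work out that there are 4 groups.
print(a.reshape(-1, 3).sum(axis=-1))   # [ 3. 12. 21. 30.]
# Mean of every 4 items: 3 groups this time.
print(a.reshape(-1, 4).mean(axis=-1))  # [1.5 5.5 9.5]
```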