The fundamental building block of Python code is an expression. Cells can contain multiple lines with multiple expressions. When you run a cell, the lines of code are executed in the order in which they appear. Every print expression prints a line. Run the next cell and notice the order of the output.
print("First this line is printed,")
print("and then this one.")
VOT/Assignment.ipynb
ds-modules/LINGUIS-110
mit
Writing Jupyter notebooks You can use Jupyter notebooks for your own projects or documents. When you make your own notebook, you'll need to create your own cells for text and code. To add a cell, click the + button in the menu bar. It'll start out as a text cell. You can change it to a code cell by clicking inside i...
# imports -- just run this cell
import scipy
import numpy as np
import pandas as pd
import seaborn as sns
from scipy.stats import mode
from ipywidgets import interact
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from matplotlib import colors
from sklearn.linear_model import LinearRegress...
2. Creating our Dataframe <a id='dataframe'></a> We will start by familiarizing ourselves with the data. To visualize the data, we need to load the file first. In the first line, we assign file_name to the name of our dataset, which is a compilation of the results from the homework you completed last week. Note...
file_name = 'data/fall17.csv'
data = pd.read_csv(file_name)
data.head()
2.1 Adding features from our data We are going to add several columns to our dataframe. A column for each of the following:
+ The semester of this class (called class)
+ Average of all closure/vot for each individual (called clo/vot)
+ Average voiced closure/vot for each individual (called vclo/vvot)
+ Average voiceles...
subset = data[['pclo', 'tclo', 'kclo', 'bclo', 'dclo', 'gclo']]
subset.head()
Then we will take the average across those rows.
clo_avg = subset.mean(axis=1)
clo_avg
And finally, we will append those values to our dataframe as a column called clo.
data['clo'] = clo_avg
data.head()
We then repeat this process for all of the other columns that we want to create.
data['vot'] = data[['pvot', 'tvot', 'kvot', 'bvot', 'dvot', 'gvot']].mean(axis=1)
data['vclo'] = data[['bclo', 'dclo', 'gclo']].mean(axis=1)
data['vvot'] = data[['bvot', 'dvot', 'gvot']].mean(axis=1)
data['vlclo'] = data[['pclo', 'tclo', 'kclo']].mean(axis=1)
data['vlvot'] = data[['pvot', 'tvot', 'kvot']].mean(axis=1)
...
3. Exploring the Data <a id='exploring data'></a> 3.1 Descriptive Statistics Below we compute some basic properties of the column clo.
closure_mode = mode(data['clo'])[0][0]
print('Mode: ', closure_mode)
data['clo'].describe()
We can calculate all of the above statistics (except mode) for the entire table with one line.
data.describe()
3.2 Data Visualization Now that we have our data in order, let's get a picture of the data with some plots. Let's start by visualizing the distribution of vot with a histogram.
sns.distplot(data['vot'], kde_kws={"label": "vot"})
Next, we'll compare the distributions of the voiced and voiceless voice-onset times.
sns.distplot(data['vvot'], kde_kws={"label": "voiced vot"})
sns.distplot(data['vlvot'], kde_kws={"label": "voiceless vot"})
plt.xlabel('ms')
The distributions of the three voiceless stops are below.
sns.distplot(data['pvot'], kde_kws={"label": "pvot"})
sns.distplot(data['tvot'], kde_kws={"label": "tvot"})
sns.distplot(data['kvot'], kde_kws={"label": "kvot"})
plt.xlabel('ms')
plt.ylabel('proportion per ms')
The distributions of the three voiced stops are below.
sns.distplot(data['bvot'], kde_kws={"label": "bvot"})
sns.distplot(data['dvot'], kde_kws={"label": "dvot"})
sns.distplot(data['gvot'], kde_kws={"label": "gvot"})
plt.xlabel('ms')
plt.ylabel('proportion per ms')
Below, we see the native languages represented in the data.
sns.countplot(y="language", data=data)
Below, we have the distribution of height.
sns.distplot(data['height'])
plt.xlabel('height (cm)')
4. Relationships between closures <a id='closures'></a> Now we will shift away from single-column visualizations and start to compare values between columns, looking specifically at the different closures in our dataframe. Run the cell below, which will automate some of the plotting for us.
def plot_with_equality_line(xs, ys, best_fit=False):
    fig, ax = plt.subplots()
    sns.regplot(xs, ys, fit_reg=best_fit, ax=ax)
    lims = [np.min([ax.get_xlim(), ax.get_ylim()]),
            np.max([ax.get_xlim(), ax.get_ylim()])]
    ax.plot(lims, lims, '--', alpha=0.75, zorder=0, c='black')
    ax.set_xlim(lims)
    ax.set_...
4.1 Using a line where x = y We'll start by making scatter plots. They take the values (from identified columns) of individual rows and plot them as dots on our coordinate plane. So in the plot below, each point will represent a person's tclo and pclo. We are going to plot a dashed line that marks where the x-value...
plot_with_equality_line(data['tclo'], data['pclo'])
plt.xlabel('tclo (ms)')
plt.ylabel('pclo (ms)')

plot_with_equality_line(data['kclo'], data['pclo'])
plt.xlabel('kclo (ms)')
plt.ylabel('pclo (ms)')

plot_with_equality_line(data['kclo'], data['tclo'])
plt.xlabel('kclo (ms)')
plt.ylabel('tclo (ms)')
4.1.2 Voiced
plot_with_equality_line(data['dclo'], data['bclo'])
plt.xlabel('dclo (ms)')
plt.ylabel('bclo (ms)')

plot_with_equality_line(data['gclo'], data['bclo'])
plt.xlabel('gclo (ms)')
plt.ylabel('bclo (ms)')

plot_with_equality_line(data['gclo'], data['dclo'])
plt.ylabel('dclo (ms)')
plt.xlabel('gclo (ms)')
4.2 Using box-and-whisker plots Those scatter plots are informative, but sometimes it's difficult to draw conclusions from them, especially in our case where we have so much raw data. To make easier comparisons about the ranges of values that our closures take, we can use boxplots.
sns.boxplot(data=data[['pclo', 'tclo', 'kclo']], width=.3, palette="Set3")
plt.ylabel('duration (ms)')
plt.xlabel('Voiceless Closures')
With the above plot, it can be difficult to compare values of the box-and-whisker plots because the outliers require us to zoom out. Below, we will zoom in to the boxes.
sns.boxplot(data=data[['pclo', 'tclo', 'kclo']], width=.3, palette="Set3")
plt.ylabel('duration (ms)')
plt.xlabel('Voiceless Closures')
plt.ylim(0, 212)
We then recreate those graphs, but using our voiced closures.
sns.boxplot(data=data[['bclo', 'dclo', 'gclo']], width=.3, palette="Set2")
plt.ylabel('duration (ms)')
plt.xlabel('Voiced Closures')

sns.boxplot(data=data[['bclo', 'dclo', 'gclo']], width=.3, palette="Set2")
plt.ylabel('duration (ms)')
plt.xlabel('Voiced Closures')
plt.ylim(0, 212)
Do our box-whisker plots corroborate the scatter plot data? Are we able to come to the same conclusions as before? 5. Explore relationships to metadata <a id='metadata'></a> Now let's explore relationships between closure and different characteristics of the people who produced those measurements, looking at langu...
sns.violinplot(x="vot", y="language", data=data)
plt.xlabel('vot (ms)')
Compare the distributions. Can you make any meaningful observations? 5.2 Height Now we'll look at how height influences closure, but first we are going to trim out one of the outliers.
trimmed = data[data['clo'] < 250]
sns.lmplot('height', 'clo', data=trimmed, fit_reg=True)
plt.xlabel('height (cm)')
plt.ylabel('clo (ms)')
In the scatter plot above, each dot represents the average closure and height of an individual. Change "fit_reg" to "True" in the code above to see the regression line. What does this graph tell us about the relationship between height and closure? Regression lines describe a general trend of the data, sometimes referred...
sns.regplot('height', 'vclo', data=trimmed, fit_reg=True)
sns.regplot('height', 'vlclo', data=trimmed, fit_reg=True)
plt.xlabel('height (cm)')
plt.ylabel('clo (ms)')
5.3 Visualizing Multiple Features So far, we've been presenting two kinds of information in one plot (e.g. language vs. closure). Would presenting more than two at once help our analysis? Let's try it. Below, the color of the dots will depend on the language that person speaks rather than their gender.
sns.lmplot('height', 'clo', data=trimmed, fit_reg=False, hue="language")
plt.xlabel('height (cm)')
plt.ylabel('clo (ms)')
What conclusions can you make from the graph above, if any? Is it easy to analyze this plot? Why? The lesson here is that sometimes less is more. 6. Compare our data with data from last semester <a id='to class'></a> It's often useful to compare current data with past data. Below, we'll explore class data collected fro...
old_file_name = 'data/fall15.csv'
fa15 = pd.read_csv(old_file_name)
fa15.head()
The data from the previous semester does not have all of the features (columns) that this semester's data has. So, to make comparisons easy, we will select just the columns that appear in both dataframes.
current_subset = data[fa15.columns]
current_subset.head()
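Note that data[fa15.columns] assumes every old column still exists in the new dataframe; if that is not guaranteed, taking the intersection of the two column indexes is a safer variant. A small sketch on toy frames (names and values hypothetical):

```python
import pandas as pd

# toy stand-ins for the two semesters' dataframes (hypothetical values)
fa_toy = pd.DataFrame({'height': [170], 'vot': [60.0]})
cur_toy = pd.DataFrame({'height': [172], 'vot': [58.0], 'clo': [90.0]})

# keep only the columns present in both frames
shared = cur_toy[cur_toy.columns.intersection(fa_toy.columns)]
print(sorted(shared.columns))  # ['height', 'vot']
```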
Let's look at the difference between the major statistics of the previous data and this semester's.
difference = fa15.describe() - current_subset.describe()
difference
It's a little unintuitive to tell how large those differences are, so let's look at the difference relative to this semester's data.
relative_difference = difference / current_subset.describe()
relative_difference
Now, let's add some color to help spot the largest relative changes. Run the next two cells.
scale = pd.DataFrame({'scale': np.arange(-3,5,1)*.2}).set_index(relative_difference.index)

def background_gradient(s, df, m=None, M=None, cmap='RdBu_r', low=0, high=0):
    # code modified from: https://stackoverflow.com/questions/38931566/pandas-style-background-gradient-both-rows-and-colums
    if m is None:
        ...
Now that we can see where the largest relative differences between this semester's and the prior semester's data are, let's take a look at them with further visualization. We'll start with vot because the column has quite a few rows with dark colors.
sns.distplot(data['vot'], kde_kws={"label": "Fall 2017 vot"})
sns.distplot(fa15['vot'], kde_kws={"label": "Fall 2015 vot"})
plt.xlabel('ms')
Why is this? The graph below should offer some insight.
sns.distplot(data['vlvot'], kde_kws={"label": "Fall 2017 vlvot"})  # notice the call to voiceless vot
sns.distplot(fa15['vot'], kde_kws={"label": "Fall 2015 vot"})
plt.xlabel('ms')
There are some large differences for kvot, so let's take a look at those distributions.
sns.distplot(fa15['kvot'], kde_kws={"label": "Fall 2015 kvot"})
sns.distplot(data['kvot'], kde_kws={"label": "Fall 2017 kvot"})
plt.xlabel('kvot (ms)')
2. Build a for-loop that goes through a predefined list of numbers and, using the function you just created, prints every result doubled.
lst = list(range(1,5))
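One possible solution sketch for exercise 2, assuming the previously created function was a simple doubler (the name verdoppeln is hypothetical):

```python
def verdoppeln(x):
    # hypothetical doubling function from the earlier exercise
    return x * 2

lst = list(range(1, 5))
for n in lst:
    print(verdoppeln(n))  # prints 2, 4, 6, 8 on separate lines
```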
03 intro python II/02 Python II Hausaugaben.ipynb
barjacks/pythonrecherche
mit
3. Write code that asks the user for their name and then tells them how many characters their name has. 4. Write a function named km_rechner that automatically performs the miles-to-km conversion for the calls listed below, displaying the result rounded to one decimal place.
km_rechner(5)
km_rechner(123)
km_rechner(53)
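A minimal km_rechner sketch that would satisfy the calls above (the conversion factor 1.60934 km per mile is assumed):

```python
def km_rechner(meilen):
    # convert miles to kilometers, rounded to one decimal place
    km = round(meilen * 1.60934, 1)
    print(km)
    return km

km_rechner(5)    # 8.0
km_rechner(123)  # 197.9
km_rechner(53)   # 85.3
```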
5. We have dictionaries of measurements that come in quite different formats. Write a function named m_converter that handles these formats and converts the measurements to meters.
# Our formats
var_first = { 'measurement': 3.4, 'scale': 'kilometer' }
var_second = { 'measurement': 9.1, 'scale': 'mile' }
var_third = { 'measurement': 2.0, 'scale': 'meter' }
var_fourth = { 'measurement': 9.0, 'scale': 'inches' }

print(m_converter(var_first))
print(m_converter(var_second))
print(m_converter(var_th...
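A sketch of one possible m_converter matching the formats above (conversion factors assumed: 1 mile = 1609.344 m, 1 inch = 0.0254 m):

```python
def m_converter(var):
    # meters per unit for each scale used in the formats above
    to_meter = {
        'kilometer': 1000.0,
        'mile': 1609.344,
        'meter': 1.0,
        'inches': 0.0254,
    }
    return var['measurement'] * to_meter[var['scale']]

print(m_converter({'measurement': 3.4, 'scale': 'kilometer'}))  # 3400.0
```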
This will generate one "station_name.hdf5" and one "station_name.csv" file for each of your stations and put them into a directory named "mseed_dir+_hdfs". Then you need to pass the name of the directory containing your hdf5 & CSV files and a model. You can use relatively low threshold values for the detection and pick...
from EQTransformer.core.predictor import predictor

predictor(input_dir='downloads_mseeds_processed_hdfs',
          input_model='../ModelsAndSampleData/EqT_original_model.h5',
          output_dir='detections1',
          estimate_uncertainty=False,
          output_probabilities=False,
          number_of_sampling=5,
          ...
examples/detection.ipynb
smousavi05/EQTransformer
mit
If you are using local MiniSeed files you can generate a station_list.json by supplying an absolute path to a directory containing Miniseed files and a station location dictionary using the stationListFromMseed function like the following:
from EQTransformer.utils.hdf5_maker import stationListFromMseed

mseed_directory = '/Users/username/Downloads/EQTransformer/examples/downloads_mseeds'
station_locations = {"CA06": [35.59962, -117.49268, 796.4],
                     "CA10": [35.56736, -117.667427, 835.9]}
stationListFromMseed(mseed_directory, station_locations)
Option (II) directly on downloaded MiniSeed files: You can perform the detection/picking directly on .mseed files. This saves both preprocessing time and the extra space needed for hdf5 files. However, it can be more memory intensive, so it is recommended when mseed files are one month long or shorter. This option also do...
from EQTransformer.core.mseed_predictor import mseed_predictor

mseed_predictor(input_dir='downloads_mseeds',
                input_model='../ModelsAndSampleData/EqT_original_model.h5',
                stations_json=json_basepath,
                output_dir='detections2',
                loss_weights=[0.02, 0.40, 0.58],
                detec...
Chapter 8 - Comparison of lists and sets You've been introduced to two containers in this topic: lists and sets. However, a question we often get is when to use a list and when a set. The goal of this chapter is to help you answer that question. At the end of this chapter, you will be able to: * decide when to use a l...
list1 = [1, 2, 1, 3, 4, 1]
set1 = {1, 2, 3, 4}
set2 = {1, 2, 1, 3, 4, 1}
print('list1', list1)
print('set1', set1)
print('set2', set2)
print('set1 is the same as set2:', set1 == set2)
Chapters-colab/Chapter_08_Comparison_of_lists_and_sets.ipynb
evanmiltenburg/python-for-text-analysis
apache-2.0
Tip You can create a set from a list. Attention: duplicates will be removed.
a_list = [1, 2, 3, 4, 4]
a_set = set(a_list)
print(a_list)
print(a_set)
1.2 Order (with respect to how elements are added to it) list: yes set: no The order in which you add elements to a list matters. Please look at the following example:
a_list = []
a_list.append(2)
a_list.append(1)
print(a_list)
However, this information is not kept in sets:
a_set = set()
a_set.add(2)
a_set.add(1)
print(a_set)
Is it possible to understand the order of items in a set? Yes, but we will not cover it here since it is not important for the tasks we cover. What is then the take home message about order? The answer is: you have it for lists, but not for sets. If you want to learn more about this, look up the data structure called h...
list1 = [1, 2, 3, 4]
print(1 in list1)
And this will usually be relatively quick:
set1 = {1, 2, 3, 4}
print(1 in set1)
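The speed difference can be made visible with timeit: membership in a list scans the items one by one, while a set uses a hash lookup. A small sketch (the container size and element are arbitrary):

```python
import timeit

big_list = list(range(100_000))
big_set = set(big_list)

# time membership tests for an element near the end of the list
t_list = timeit.timeit(lambda: 99_999 in big_list, number=100)
t_set = timeit.timeit(lambda: 99_999 in big_set, number=100)
print(t_set < t_list)  # True: the set lookup is much faster
```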
Is it possible to understand the speed of finding elements in sets and lists? Yes, but we will not cover it here since it is not important for the tasks we cover. What is then the take home message about speed? The answer is: it's probably quicker to use sets. 1.4 Mutability of the elements they can contain Sets can...
a_set = set()
a_set.add(1)
print(a_set)
This does not work:
a_set.add([1])  # raises TypeError: unhashable type: 'list'
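If you do need to put a collection into a set, an immutable, hashable stand-in works. A small sketch:

```python
a_set = set()
a_set.add((1,))             # a tuple is hashable, so this works
a_set.add(frozenset([1]))   # frozenset: an immutable, hashable set
print(a_set)
```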
lists can contain any Python object. This works:
a_list = []
a_list.append(1)
print(a_list)
This works as well:
a_list = []
a_list.append([1])
print(a_list)
Magic Commands %matplotlib inline: show() can be omitted %matplotlib qt: output to an external window
# make point with cumulative sum
points = np.random.randn(50).cumsum()
points
scripts/[HYStudy 15th] Matplotlib 2.ipynb
Lattecom/HYStudy
mit
Line plot
# plt.plot(x, y): x, y = point(x, y) on coordinate
# put y only (default x = auto)
plt.plot(points)
plt.show()

# put x and y points
plt.plot(range(0, 250, 5), points)
plt.show()
Style setting {color}{marker}{line}
color: http://matplotlib.org/examples/color/named_colors.html
marker: http://matplotlib.org/api/markers_api.html?highlight=marker#module-matplotlib.markers
line: http://matplotlib.org/api/lines_api.html?highlight=line#matplotlib.lines.Line2D.set_linestyle
style(other attributes)...
# set color, marker, line
plt.plot(points, 'co:')
plt.show()

# style setting
plt.plot(points, 'co-', lw=3, ms=5, mfc='b')  # lw=linewidth, ms=marker size, mfc=marker face color
plt.xlim(-10, 60)  # set x axis limit
plt.ylim(-5, 5)  # set y axis limit
plt.show()

# style setting
plt.plot(points, 'co-', lw=3, ms=5, mfc='b'...
Legend, Title
plt.legend(loc=x): x = legend location, https://matplotlib.org/api/legend_api.html
plt.xlabel("label name"): set x label as "label name"
plt.ylabel("label name"): set y label as "label name"
plt.title("plot title"): set plot title as "plot title"
# legend, title
plt.rc('font', family='nanumgothic')  # set font family, use Korean
plt.plot(points, label='random points')  # set plot 1 label
plt.plot(0.5 * points, label='임의값')  # set plot 2 label
plt.legend()
plt.xlabel('random x')  # set x label
plt.ylabel('random y')  # set y label
plt.title('random plot')  # set the t...
Annotation annotation attributes: https://matplotlib.org/users/annotations_intro.html more details: https://matplotlib.org/users/annotations_guide.html#plotting-guide-annotation
plt.plot(points)
plt.annotate(  # text, arrow point(x, y), xy coordinate
    r'(text)', xy=(40, -4), xycoords='data',
    # text location from text coordinate, text coordinate
    xytext=(-50, 50), textcoords='offset points',
    # font, arrow shape
    fontsize=20, arrowprops...
Figure size plt.figure(figsize=(x, y)): set the figure size as x, y
plt.figure(figsize=(20, 3))
plt.plot(points)
plt.show()
Axes, Subplots Doc: http://matplotlib.org/api/axes_api.html#matplotlib.axes.Axes plt.subplot(X, Y, Z): make subplots shape as (X, Y), and Z is location number in (X, Y)
ax1 = plt.subplot(2, 1, 1)
plt.plot(points)
ax2 = plt.subplot(2, 1, 2)
plt.plot(np.random.randn(50))
plt.show()
Bar chart Doc for vertical bar: http://matplotlib.org/1.5.1/api/pyplot_api.html#matplotlib.pyplot.bar Doc for horizontal bar: http://matplotlib.org/1.5.1/api/pyplot_api.html#matplotlib.pyplot.barh
x = [3, 2, 1]
y = [1, 2, 3]
xlabel = ['한개', '두개', '세개']

# plt.bar: vertical / plt.barh: horizontal
plt.bar(x, y, align='center')  # align: center(default), edge
plt.xticks(x, xlabel)
plt.show()
Histogram Doc: http://matplotlib.org/1.5.1/api/pyplot_api.html#matplotlib.pyplot.hist
x = np.random.randint(0, 10, 10)
print(x)

arrays, bins, patches = plt.hist(x, bins=6)
plt.show()

# value counts for each bin
print(arrays)
# the range of each bin
print(bins)
Pie chart Doc: http://matplotlib.org/1.5.1/api/pyplot_api.html#matplotlib.pyplot.pie Demo: https://matplotlib.org/1.5.3/examples/pylab_examples/pie_demo2.html
plt.pie([30, 50, 10],  # size
        labels=['피자', '햄버거', '감자튀김'],  # label
        colors=['pink', 'salmon', 'tomato'],  # colors
        explode=(0.01, 0.01, 0.2),  # explode
        autopct='%.2f%%',  # set the ratio label format
        shadow=True,  # pie chart shadow
        startangle=0)  # rotate the char...
Then read the MNIST data.
data = get_MNIST_data(subtract_mean=False)
# check if we load the data successfully
print(data['X_train'].shape)
jupyter/Traditional methods - opencv+scikit_learn.ipynb
lizoyu/kaggle-DigitRecognizer
gpl-3.0
Preprocessing Feature extraction Different methods exist to extract features. Here we try ORB (Oriented FAST and Rotated BRIEF).
# check the min number of keypoints
orb = cv2.ORB_create(edgeThreshold=2, patchSize=2)
len_k = 500
for key in ['X_train', 'X_test']:
    for img in data[key]:
        k = orb.detect(img.astype(np.uint8).reshape((28,28)))
        if len(k) < len_k:
            len_k = len(k)
print('minimum number of keypoints:', len_k)
...
Here we try HOG (Histogram of Oriented Gradients).
# compute the HOG for each image
feats = {'X_train': [], 'X_test': []}
for key in feats.keys():
    print('compute for data: ', key)
    for img in data[key]:
        feat = hog(img.reshape((28,28)), pixels_per_cell=(7,7),
                   cells_per_block=(4,4), block_norm='L2-Hy...
It's possible to use PCA to reduce the dimensionality of the features, to avoid the curse of dimensionality for common classifiers.
# initialize PCA with top 50
pca = PCA(n_components=50)
pca.fit(feats['X_train'])

feats_reduce = {'X_train': [], 'X_test': []}
for key in feats.keys():
    feats_reduce[key] = pca.transform(feats[key])

# check the computed features size
print(feats_reduce['X_train'].shape)
print(feats_reduce['X_test'].shape)
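As a quick sanity check on such a reduction, explained_variance_ratio_ reports the fraction of variance the kept components retain. A sketch on synthetic data (not the notebook's HOG features):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.randn(200, 100)  # synthetic stand-in for a feature matrix

pca = PCA(n_components=50).fit(X)
retained = pca.explained_variance_ratio_.sum()
print(0.0 < retained < 1.0)  # fraction of total variance kept by 50 components
```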
Classification Different machine learning methods are used to classify the digits.
# decision tree
dt = DecisionTreeClassifier()
dt.fit(feats['X_train'], data['y_train'])
print(dt.score(feats['X_test'], data['y_test']))
# test accuracy of 57.2% using ORB
# test accuracy of 90.2% using HOG (7, 2)
# test accuracy of 90.3% using HOG (7, 4)

# decision tree for reduced data
dt = DecisionTreeClassifier()
d...
TF Lattice Canned Estimators
#@test {"skip": true}
!pip install tensorflow-lattice
site/en-snapshot/lattice/tutorials/canned_estimators.ipynb
tensorflow/docs-l10n
apache-2.0
Importing required packages:
import tensorflow as tf

import copy
import logging
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
from tensorflow import feature_column as fc

logging.disable(sys.maxsize)
Downloading the UCI Statlog (Heart) dataset:
csv_file = tf.keras.utils.get_file(
    'heart.csv',
    'http://storage.googleapis.com/download.tensorflow.org/data/heart.csv')
df = pd.read_csv(csv_file)
target = df.pop('target')
train_size = int(len(df) * 0.8)
train_x = df[:train_size]
train_y = target[:train_size]
test_x = df[train_size:]
test_y = target[train_size:]
...
Setting the default values used for training in this guide:
LEARNING_RATE = 0.01
BATCH_SIZE = 128
NUM_EPOCHS = 500
PREFITTING_NUM_EPOCHS = 10
Feature Columns As for any other TF estimator, data needs to be passed to the estimator, typically via an input_fn, and parsed using FeatureColumns.
# Feature columns.
# - age
# - sex
# - cp        chest pain type (4 values)
# - trestbps  resting blood pressure
# - chol      serum cholestoral in mg/dl
# - fbs       fasting blood sugar > 120 mg/dl
# - restecg   resting electrocardiographic results (values 0,1,2)
# - thalach   maximum heart rate achieved
# - exang ...
TFL canned estimators use the type of the feature column to decide what type of calibration layer to use. We use a tfl.layers.PWLCalibration layer for numeric feature columns and a tfl.layers.CategoricalCalibration layer for categorical feature columns. Note that categorical feature columns are not wrapped by an embedd...
train_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
    x=train_x,
    y=train_y,
    shuffle=False,
    batch_size=BATCH_SIZE,
    num_epochs=NUM_EPOCHS,
    num_threads=1)

# feature_analysis_input_fn is used to collect statistics about the input.
feature_analysis_input_fn = tf.compat.v1.estimator.inputs....
Feature Configs Feature calibration and per-feature configurations are set using tfl.configs.FeatureConfig. Feature configurations include monotonicity constraints, per-feature regularization (see tfl.configs.RegularizerConfig), and lattice sizes for lattice models. If no configuration is defined for an input feature, ...
# Feature configs are used to specify how each feature is calibrated and used.
feature_configs = [
    tfl.configs.FeatureConfig(
        name='age',
        lattice_size=3,
        # By default, input keypoints of pwl are quantiles of the feature.
        pwl_calibration_num_keypoints=5,
        monotonicity='increasi...
Calibrated Linear Model To construct a TFL canned estimator, construct a model configuration from tfl.configs. A calibrated linear model is constructed using tfl.configs.CalibratedLinearConfig. It applies piecewise-linear and categorical calibration on the input features, followed by a linear combination and an optiona...
# Model config defines the model structure for the estimator.
model_config = tfl.configs.CalibratedLinearConfig(
    feature_configs=feature_configs,
    use_bias=True,
    output_calibration=True,
    regularizer_configs=[
        # Regularizer for the output calibrator.
        tfl.configs.RegularizerConfig(name='out...
Calibrated Lattice Model A calibrated lattice model is constructed using tfl.configs.CalibratedLatticeConfig. A calibrated lattice model applies piecewise-linear and categorical calibration on the input features, followed by a lattice model and an optional output piecewise-linear calibration. This example creates a cal...
# This is a calibrated lattice model: inputs are calibrated, then combined
# non-linearly using a lattice layer.
model_config = tfl.configs.CalibratedLatticeConfig(
    feature_configs=feature_configs,
    regularizer_configs=[
        # Torsion regularizer applied to the lattice to make it more linear.
        tfl.confi...
Calibrated Lattice Ensemble When the number of features is large, you can use an ensemble model, which creates multiple smaller lattices for subsets of the features and averages their output instead of creating just a single huge lattice. Ensemble lattice models are constructed using tfl.configs.CalibratedLatticeEnsemb...
# This is a random lattice ensemble model with separate calibration:
# model output is the average output of separately calibrated lattices.
model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
    feature_configs=feature_configs,
    num_lattices=5,
    lattice_rank=3)

# A CannedClassifier is constructed from the...
RTL Layer Random Lattice Ensemble The following model config uses a tfl.layers.RTL layer that uses a random subset of features for each lattice. We note that tfl.layers.RTL only supports monotonicity constraints and must have the same lattice size for all features and no per-feature regularization. Note that using a tf...
# Make sure our feature configs have the same lattice size, no per-feature
# regularization, and only monotonicity constraints.
rtl_layer_feature_configs = copy.deepcopy(feature_configs)
for feature_config in rtl_layer_feature_configs:
    feature_config.lattice_size = 2
    feature_config.unimodality = 'none'
    feature_co...
Crystals Lattice Ensemble TFL also provides a heuristic feature arrangement algorithm, called Crystals. The Crystals algorithm first trains a prefitting model that estimates pairwise feature interactions. It then arranges the final ensemble such that features with more non-linear interactions are in the same lattices. ...
prefitting_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
    x=train_x,
    y=train_y,
    shuffle=False,
    batch_size=BATCH_SIZE,
    num_epochs=PREFITTING_NUM_EPOCHS,
    num_threads=1)
You can then create a Crystals model by setting lattices='crystals' in the model config.
# This is a Crystals ensemble model with separate calibration: model output is
# the average output of separately calibrated lattices.
model_config = tfl.configs.CalibratedLatticeEnsembleConfig(
    feature_configs=feature_configs,
    lattices='crystals',
    num_lattices=5,
    lattice_rank=3)

# A CannedClassifier is c...
You can plot feature calibrators with more details using the tfl.visualization module.
_ = tfl.visualization.plot_feature_calibrator(model_graph, "age") _ = tfl.visualization.plot_feature_calibrator(model_graph, "restecg")
Exercise 1: Linear Regression Use the slides from the lecture. Compare the coefficients $w$ found by your solution with those found by the LinearRegression class of the Scikit-Learn library.
class RegressioneLineare(object): def fit(self, x, y): # Build the matrix with vector (1, x) as rows X = np.matrix(list(map(lambda row: np.append([1], row), x))) # Solve the normal equation (what if X is not invertible?) self.w = matmul(matmul(inv(matmul(transpose(X), X)), transpose(...
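The normal-equation fit in the class above can be cross-checked with a plain-NumPy sketch; `np.linalg.pinv` sidesteps the invertibility question raised in the comment. The data and function name here are illustrative assumptions, not part of the lab:

```python
import numpy as np

def fit_normal_equation(x, y):
    """Least-squares fit w = (X^T X)^+ X^T y; pinv covers the singular case."""
    X = np.column_stack([np.ones(len(x)), x])   # prepend the (1, x) bias column
    return np.linalg.pinv(X.T @ X) @ X.T @ y

# Synthetic data: y = 2 + 3x plus small noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2 + 3 * x + rng.normal(scale=0.1, size=50)
w = fit_normal_equation(x, y)
```

For a well-conditioned design matrix this agrees with the coefficients sklearn's LinearRegression recovers, which is exactly the comparison the exercise asks for.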
Lab 7 - Implementazione di regressioni.ipynb
mathcoding/Programmazione2
mit
Exercise 2: Logistic Regression Use the slides from the lecture. Compare the coefficients $w$ found by your solution with those found by the LogisticRegression class of the Scikit-Learn library.
class RegressioneLogistica(object): def fit(self, x, y): # TO COMPLETE: Newton-Raphson method from the slides pass def predict(self, x): # TO COMPLETE: use the parameters w pass
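The stub asks for Newton-Raphson; a minimal IRLS sketch on synthetic 1-D data gives the shape of the update. This is an illustration under assumed conventions (bias column prepended, log-likelihood maximized), not the course solution:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic_newton(x, y, n_iter=15):
    """Newton-Raphson (IRLS) for logistic regression on 1-D inputs."""
    X = np.column_stack([np.ones(len(x)), x])   # prepend the bias column
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = sigmoid(X @ w)
        grad = X.T @ (y - p)                    # gradient of the log-likelihood
        H = X.T @ (X * (p * (1 - p))[:, None])  # Hessian: X^T W X
        w = w + np.linalg.solve(H, grad)        # Newton step
    return w

# Labels drawn with probability sigmoid(2x), so the true slope is 2.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = (rng.uniform(size=200) < sigmoid(2.0 * x)).astype(float)
w = fit_logistic_newton(x, y)
```

Because the labels are generated probabilistically the classes overlap, so the likelihood has a finite maximizer and Newton's method converges in a handful of iterations.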
In the original data, each Wikipedia article is represented by a URI, a name, and a string containing the entire text of the article. Recall from the video lectures that LDA requires documents to be represented as a bag of words, which ignores word ordering in the document but retains information on how many times each...
wiki_docs = gl.text_analytics.count_words(wiki['text']) wiki_docs = wiki_docs.dict_trim_by_keys(gl.text_analytics.stopwords(), exclude=True)
coursera/ml-clustering-and-retrieval/assignments/5_lda_blank.ipynb
jinntrance/MOOC
cc0-1.0
Model fitting and interpretation In the video lectures we saw that Gibbs sampling can be used to perform inference in the LDA model. In this assignment we will use a GraphLab Create method to learn the topic model for our Wikipedia data, and our main emphasis will be on interpreting the results. We'll begin by creating...
topic_model = gl.topic_model.create(wiki_docs, num_topics=10, num_iterations=200)
GraphLab provides a useful summary of the model we have fitted, including the hyperparameter settings for alpha, gamma (note that GraphLab Create calls this parameter beta), and K (the number of topics); the structure of the output data; and some useful methods for understanding the results.
topic_model
It is certainly useful to have pre-implemented methods available for LDA, but as with our previous methods for clustering and retrieval, implementing and fitting the model gets us only halfway towards our objective. We now need to analyze the fitted model to understand what it has done with our data and whether it will...
topic_model = gl.load_model('topic_models/lda_assignment_topic_model')
Identifying topic themes by top words We'll start by trying to identify the topics learned by our model with some major themes. As a preliminary check on the results of applying this method, it is reasonable to hope that the model has been able to learn topics that correspond to recognizable categories. In order to do ...
[x['words'] for x in topic_model.get_topics(output_type='topic_words', num_words=10)] sum(topic_model.get_topics(topic_ids=[2], num_words=50)['score'])
We propose the following themes for each topic: topic 0: Science and research topic 1: Team sports topic 2: Music, TV, and film topic 3: American college and politics topic 4: General politics topic 5: Art and publishing topic 6: Business topic 7: International athletics topic 8: Great Britain and Australia topic 9: International music
themes = ['science and research','team sports','music, TV, and film','American college and politics','general politics', \ 'art and publishing','Business','international athletics','Great Britain and Australia','international music']
Measuring the importance of top words We can learn more about topics by exploring how they place probability mass (which we can think of as a weight) on each of their top words. We'll do this with two visualizations of the weights for the top words in each topic: - the weights of the top 100 words, sorted by the size ...
for i in range(10): plt.plot(range(100), topic_model.get_topics(topic_ids=[i], num_words=100)['score']) plt.xlabel('Word rank') plt.ylabel('Probability') plt.title('Probabilities of Top 100 Words in each Topic')
In the above plot, each line corresponds to one of our ten topics. Notice how for each topic, the weights drop off sharply as we move down the ranked list of most important words. This shows that the top 10-20 words in each topic are assigned a much greater weight than the remaining words - and remember from the summar...
top_probs = [sum(topic_model.get_topics(topic_ids=[i], num_words=10)['score']) for i in range(10)] ind = np.arange(10) width = 0.5 fig, ax = plt.subplots() ax.bar(ind-(width/2),top_probs,width) ax.set_xticks(ind) plt.xlabel('Topic') plt.ylabel('Probability') plt.title('Total Probability of Top 10 Words in each Topi...
Here we see that, for our topic model, the top 10 words only account for a small fraction (in this case, between 5% and 13%) of their topic's total probability mass. So while we can use the top words to identify broad themes for each topic, we should keep in mind that in reality these topics are more complex than a sim...
obama = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Barack Obama')[0])]]) pred1 = topic_model.predict(obama, output_type='probability') pred2 = topic_model.predict(obama, output_type='probability') print(gl.SFrame({'topics':themes, 'predictions (first draw)':pred1[0], 'predictions (second draw)':pred2[0]}))
To get a more robust estimate of the topics for each document, we can average a large number of predictions for the same document:
def average_predictions(model, test_document, num_trials=100): avg_preds = np.zeros((model.num_topics)) for i in range(num_trials): avg_preds += model.predict(test_document, output_type='probability')[0] avg_preds = avg_preds/num_trials result = gl.SFrame({'topics':themes, 'average predictions':...
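The variance-reduction idea behind average_predictions can be seen with a toy multinomial sampler standing in for the topic model's stochastic predictions (the "true" mix below is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
true_topics = np.array([0.6, 0.3, 0.1])   # hypothetical document-topic mix

def one_draw():
    # One noisy multinomial estimate of the proportions, mimicking a single
    # sampling-based prediction from the topic model.
    counts = rng.multinomial(50, true_topics)
    return counts / counts.sum()

# Averaging 100 draws shrinks the per-topic noise by a factor of ~10.
averaged = np.mean([one_draw() for _ in range(100)], axis=0)
```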
Quiz Question: What is the topic most closely associated with the article about former US President George W. Bush? Use the average results from 100 topic predictions. Quiz Question: What are the top 3 topics corresponding to the article about English football (soccer) player Steven Gerrard? Use the average results fro...
wiki['lda'] = topic_model.predict(wiki_docs, output_type='probability')
Next we add the TF-IDF document representations:
wiki['word_count'] = gl.text_analytics.count_words(wiki['text']) wiki['tf_idf'] = gl.text_analytics.tf_idf(wiki['word_count'])
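TF-IDF itself is simple to state; here is a dense NumPy sketch of one common unsmoothed variant (GraphLab Create's exact weighting and smoothing may differ):

```python
import numpy as np

def tf_idf(counts):
    """TF-IDF for a dense doc-term count matrix (docs x terms)."""
    n_docs = counts.shape[0]
    df = (counts > 0).sum(axis=0)   # document frequency of each term
    idf = np.log(n_docs / df)       # classic idf = log(N / df)
    return counts * idf

counts = np.array([[2, 0, 1],
                   [0, 3, 1],
                   [1, 0, 0]])
weighted = tf_idf(counts)
```

Terms that appear in every document get idf = log(1) = 0 and drop out, which is why TF-IDF emphasizes uncommon words in the nearest-neighbor comparison below.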
For each of our two different document representations, we can use GraphLab Create to compute a brute-force nearest neighbors model:
model_tf_idf = gl.nearest_neighbors.create(wiki, label='name', features=['tf_idf'], method='brute_force', distance='cosine') model_lda_rep = gl.nearest_neighbors.create(wiki, label='name', features=['lda'], method='brute_force', dist...
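Brute-force nearest neighbors under cosine distance is short enough to sketch directly (toy vectors here, not the Wikipedia features):

```python
import numpy as np

def cosine_nearest(query, docs, k=3):
    """Brute-force k-NN under cosine distance; docs is (n_docs, n_features)."""
    qn = query / np.linalg.norm(query)
    dn = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    dist = 1.0 - dn @ qn               # cosine distance = 1 - cosine similarity
    order = np.argsort(dist)[:k]       # k smallest distances
    return order, dist[order]

docs = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 0.1]])
idx, d = cosine_nearest(np.array([1.0, 0.2]), docs, k=2)
```

Cosine distance ignores vector length, so a long article and a short one with the same word proportions compare as near-identical — a useful property for both TF-IDF and topic-distribution representations.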
Let's compare these nearest neighbor models by finding the nearest neighbors under each representation on an example document. For this example we'll use Paul Krugman, an American economist:
model_tf_idf.query(wiki[wiki['name'] == 'Paul Krugman'], label='name', k=10) model_lda_rep.query(wiki[wiki['name'] == 'Paul Krugman'], label='name', k=10) k5000 = model_tf_idf.query(wiki[wiki['name'] == 'Alex Rodriguez'], label='name', k=5000) k5000[k5000['reference_label'] == 'Mariano Rivera'] l5000 = model_lda_rep...
Notice that there is no overlap between the two sets of top 10 nearest neighbors. This doesn't necessarily mean that one representation is better or worse than the other, but rather that they are picking out different features of the documents. With TF-IDF, documents are distinguished by the frequency of uncommon...
tpm_low_alpha = gl.load_model('topic_models/lda_low_alpha') tpm_high_alpha = gl.load_model('topic_models/lda_high_alpha')
Changing the hyperparameter alpha Since alpha is responsible for smoothing document preferences over topics, the impact of changing its value should be visible when we plot the distribution of topic weights for the same document under models fit with different alpha values. In the code below, we plot the (sorted) topic...
a = np.sort(tpm_low_alpha.predict(obama,output_type='probability')[0])[::-1] b = np.sort(topic_model.predict(obama,output_type='probability')[0])[::-1] c = np.sort(tpm_high_alpha.predict(obama,output_type='probability')[0])[::-1] ind = np.arange(len(a)) width = 0.3 def param_bar_plot(a,b,c,ind,width,ylim,param,xlab,yl...
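The smoothing role of alpha is easy to reproduce with raw Dirichlet draws, independent of GraphLab (the values of k and the two alphas are chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 10  # number of topics

# Low alpha: the draw concentrates its mass on a few topics.
low_alpha_weights = rng.dirichlet([0.1] * k)
# High alpha: the draw is close to uniform across topics.
high_alpha_weights = rng.dirichlet([50.0] * k)
```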
Here we can clearly see the smoothing enforced by the alpha parameter - notice that when alpha is low most of the weight in the topic distribution for this article goes to a single topic, but when alpha is high the weight is much more evenly distributed across the topics. Quiz Question: How many topics are assigned a w...
pk = gl.SArray([wiki_docs[int(np.where(wiki['name']=='Paul Krugman')[0])]]) pk1 = tpm_high_alpha.predict(pk, output_type='probability') pk2 = tpm_high_alpha.predict(pk, output_type='probability') print(gl.SFrame({'topics':themes, 'predictions (first draw)':pk1[0], 'predictions (second draw)':pk2[0]})) print average_pre...
Quiz Question: How many topics are assigned a weight greater than 0.3 or less than 0.05 for the article on Paul Krugman in the high alpha model? Use the average results from 100 topic predictions. Changing the hyperparameter gamma Just as we were able to see the effect of alpha by plotting topic weights for a document,...
del tpm_low_alpha del tpm_high_alpha tpm_low_gamma = gl.load_model('topic_models/lda_low_gamma') tpm_high_gamma = gl.load_model('topic_models/lda_high_gamma') a_top = np.sort([sum(tpm_low_gamma.get_topics(topic_ids=[i], num_words=100)['score']) for i in range(10)])[::-1] b_top = np.sort([sum(topic_model.get_topics(top...
From these two plots we can see that the low gamma model results in higher weight placed on the top words and lower weight placed on the bottom words for each topic, while the high gamma model places relatively less weight on the top words and more weight on the bottom words. Thus increasing gamma results in topics tha...
sum([len(tpm_low_gamma.get_topics(topic_ids=[i], num_words=5000, cdf_cutoff = 0.5)['score']) for i in range(10)])/10.0
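The quantity being averaged here — how many top words it takes to reach half a topic's probability mass — can be computed directly from a weight vector (the example distribution below is hypothetical):

```python
import numpy as np

def words_to_half_mass(topic_word_probs):
    """Number of top-ranked words whose cumulative probability first reaches 0.5."""
    p = np.sort(np.asarray(topic_word_probs))[::-1]   # descending by weight
    return int(np.searchsorted(np.cumsum(p), 0.5) + 1)

# A hypothetical topic: a few heavy words plus a long flat tail.
probs = np.array([0.3, 0.15, 0.1, 0.05] + [0.004] * 100)
n = words_to_half_mass(probs)
```

A low-gamma topic reaches 0.5 with few words (peaked distribution); a high-gamma topic needs many more, which is exactly the contrast the two models above exhibit.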
Quiz Question: For each topic of the high gamma model, compute the number of words required to make a list with total probability 0.5. What is the average number of words required across all topics? (HINT: use the get_topics() function from GraphLab Create with the cdf_cutoff argument).
sum([len(tpm_high_gamma.get_topics(topic_ids=[i], num_words=5000, cdf_cutoff = 0.5)) for i in range(10)])/10.0 tpm_high_gamma.get_topics(topic_ids=[1],num_words=1000, cdf_cutoff = 0.5)