Using OOI data in the cloud with Pangeo

In this workshop we will use [Pangeo's](http://pangeo.io/) [Binderhub](https://binderhub.readthedocs.io/en/latest/) to do some science with OOI data in the cloud together. Since we are working today on the Binderhub, our work will be ephemeral, but if you would like to continue wor...
import pandas as pd
import xarray as xr
import hvplot.xarray
import hvplot.pandas
from matplotlib import pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (14, 8)
_____no_output_____
MIT
ooi-botpt.ipynb
tjcrone/ooi-pangeo-virtual-booth-2021
Next, find some data on the OOI Data Explorer. Link to the new OOI Data Explorer: https://dataexplorer.oceanobservatories.org/
from erddapy import ERDDAP

server = 'http://erddap.dataexplorer.oceanobservatories.org/erddap'
protocol = 'tabledap'
e = ERDDAP(server=server, protocol=protocol)
e.dataset_id = 'ooi-rs03ccal-mj03f-05-botpta301'
e.get_info_url()
info_df = pd.read_csv(e.get_info_url(response='csv'))
info_df.head()
info_df[info...
_____no_output_____
MIT
ooi-botpt.ipynb
tjcrone/ooi-pangeo-virtual-booth-2021
Earthquake catalog from the OOI seismic array at Axial Seamount

Here we parse and plot Axial Seamount earthquake catalog data from [William Wilcock's near-real-time automated earthquake location system](http://axial.ocean.washington.edu/). The data we will use is a text file in the HYPO71 output format located here: h...
eqs_url = 'hypo71.dat'
col_names = ['ymd', 'hm', 's', 'lat_deg', 'lat_min', 'lon_deg', 'lon_min', 'depth',
             'MW', 'NWR', 'GAP', 'DMIN', 'RMS', 'ERH', 'ERZ', 'ID', 'PMom', 'SMom']
eqs = pd.read_csv(eqs_url, sep=r'\s+', header=0, names=col_names)
eqs.head()
from datetime import datetime
def parse_hypo_date(ymd,...
_____no_output_____
MIT
ooi-botpt.ipynb
tjcrone/ooi-pangeo-virtual-booth-2021
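The truncated `parse_hypo_date` helper above can be sketched as follows. This is an assumption about the intended behavior, since the cell is cut off; the YYMMDD/HHMM/seconds field layout is inferred from the column names:

```python
import pandas as pd

def parse_hypo_date(ymd, hm, s):
    # Hypothetical reconstruction: combine HYPO71 date fields into one Timestamp.
    # ymd is YYMMDD, hm is HHMM, s is seconds (possibly fractional).
    base = pd.to_datetime(f"{int(ymd):06d}{int(hm):04d}", format="%y%m%d%H%M")
    return base + pd.to_timedelta(float(s), unit="s")

print(parse_hypo_date(150424, 1230, 12.34))  # -> 2015-04-24 12:30:12.340000
```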
Load earthquake catalog pickle file
eqs_df = pd.read_pickle('hypo71.pkl')
eqs_df.head()
eqs_ds = eqs_df.to_xarray()
eqs_ds
eqs_ds = eqs_ds.set_coords(['lat', 'lon'])
eqs_ds
eqs_ds.mw.hvplot.scatter(x='time', datashade=True)
_____no_output_____
MIT
ooi-botpt.ipynb
tjcrone/ooi-pangeo-virtual-booth-2021
https://xarray.pydata.org/en/stable/user-guide/combining.html
all_ds = xr.merge([eqs_ds, botpt_ds])
all_ds
all_ds.mw.plot(marker='.', linestyle='', markersize=1);
all_ds.mw.hvplot.scatter(datashade=True)
all_ds.mw.hvplot.scatter(datashade=True, x='time')
_____no_output_____
MIT
ooi-botpt.ipynb
tjcrone/ooi-pangeo-virtual-booth-2021
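A minimal, self-contained sketch of what `xr.merge` does with datasets that share a coordinate; the toy arrays below are hypothetical stand-ins for `eqs_ds` and `botpt_ds`:

```python
import numpy as np
import xarray as xr

# Hypothetical stand-ins for the earthquake and bottom-pressure data on a shared 'time' axis.
a = xr.DataArray([1.0, 2.0, 3.0], dims="time", coords={"time": [0, 1, 2]}, name="mw")
b = xr.DataArray([10.0, 20.0], dims="time", coords={"time": [1, 2]}, name="botpres")

merged = xr.merge([a, b])  # aligns on 'time' with an outer join by default
print(merged.sizes["time"])  # -> 3 (missing botpres values become NaN)
```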
Daily Counts
daily_count = all_ds.mw.resample(time='1D').count()
daily_count
daily_count.plot();
fig, ax1 = plt.subplots()
ax1.bar(daily_count.to_series()['2015'].index, daily_count.to_series()['2015'].values, width=3)
ax2 = ax1.twinx()
ax2.plot(botpt_rolling.botpres.to_series()['2015'], color='cyan');
_____no_output_____
MIT
ooi-botpt.ipynb
tjcrone/ooi-pangeo-virtual-booth-2021
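The `resample(time='1D').count()` call above bins events into calendar days and counts them; a tiny self-contained example (with made-up timestamps) shows the shape of the result:

```python
import pandas as pd
import xarray as xr

times = pd.to_datetime(["2015-04-24 01:00", "2015-04-24 13:00", "2015-04-25 06:00"])
mw = xr.DataArray([1.2, 2.0, 0.8], dims="time", coords={"time": times}, name="mw")

daily = mw.resample(time="1D").count()  # two events on the 24th, one on the 25th
print(daily.values)  # -> [2 1]
```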
Mapping eq data

Let's make some maps just because we can.
import cartopy.crs as ccrs
import cartopy
import numpy as np

caldera = pd.read_csv('caldera.csv')
caldera.head()
now = pd.Timestamp('now')
eqs_sub = eqs[now-pd.Timedelta(weeks=2):]
ax = plt.axes(projection=ccrs.Robinson(central_longitude=-130))
ax.plot(caldera.lon, caldera.lat, transform=ccrs.Geodetic())
ax.gridlines...
_____no_output_____
MIT
ooi-botpt.ipynb
tjcrone/ooi-pangeo-virtual-booth-2021
OOI Seafloor Camera Data

Now let's look at some video data from the [OOI Seafloor Camera](https://oceanobservatories.org/instrument-class/camhd/) system deployed at Axial Volcano on the Juan de Fuca Ridge. We will make use of the [Pycamhd](https://github.com/tjcrone/pycamhd) library, which can be used to extract frames...
dbcamhd_url = 'https://ooiopendata.blob.core.windows.net/camhd/dbcamhd.json'

def show_image(frame_number):
    plt.rc('figure', figsize=(12, 6))
    plt.rcParams.update({'font.size': 8})
    frame = camhd.get_frame(mov.url, frame_number)
    fig, ax = plt.subplots();
    im1 = ax.imshow(frame);
    plt.yticks(np.arange...
_____no_output_____
MIT
ooi-botpt.ipynb
tjcrone/ooi-pangeo-virtual-booth-2021
Data Visualization

- Pie Chart: Compare Percentages
- Bar Chart: Compare Scores across groups
- Histogram: Show frequency of values/value range
- Line Chart: Show trend of Scores
- Scatter Plot: Show Relationship between a pair of Scores
- Map: Show Geo Distribution of data

|Type|Variable Y|Variable X|
|:--:|:--:|:--:|
|Pie ...
import plotly.plotly as py       # import the library and give it an abbreviated name
import plotly.graph_objs as go   # go: graph objects
from plotly import tools

py.sign_in('USER NAME', 'API TOKEN')  # fill in your user name and API token
_____no_output_____
MIT
doc/Class4.ipynb
juniorworld/python-workshop-2018
*** Pie Chart
labels = ['Female', 'Male']
values = [40, 20]
trace = go.Pie(labels=labels, values=values)
py.iplot([trace], filename='pie_chart')

# change data labels by re-defining the "textinfo" parameter
labels = ['Female', 'Male']
values = [40, 20]
trace = go.Pie(labels=labels, values=values, textinfo='label+value')
py.iplot([trace], ...
_____no_output_____
MIT
doc/Class4.ipynb
juniorworld/python-workshop-2018
Practice:
---
Please download the Hong Kong census data about educational attainment from this link. Create a pie chart to visualize the percentages of different education levels in 2016. The pie chart should meet the following requirements:
1. Donut style
2. Change slice colors
#Write down your code here #---------------------------------------------------------
_____no_output_____
MIT
doc/Class4.ipynb
juniorworld/python-workshop-2018
***

Bar Chart

For more details: https://plot.ly/python/reference/bar
x = ['Female', 'Male']
y = [1.6, 1.8]
trace = go.Bar(x=x, y=y)
py.iplot([trace], filename='bar_chart')

# widen the gap between bars by increasing the "bargap" parameter in the layout
x = ['Female', 'Male']
y = [40, 20]
trace = go.Bar(x=x, y=y)
layout = go.Layout(bargap=0.5)
fig = go.Figure([trace], layout)
py.iplot(fig, filename=...
_____no_output_____
MIT
doc/Class4.ipynb
juniorworld/python-workshop-2018
Practice:
---
Please refer to "Hong Kong Census Educational Attainment.csv". Create a bar chart to visualize the percentages of different education levels in different years, i.e. 2006, 2011 and 2016. The bar chart should meet the following requirements:
1. A bar represents a year
2. 100% stacked bar chart: higher...
#Write down your code here #---------------------------------------------------------
_____no_output_____
MIT
doc/Class4.ipynb
juniorworld/python-workshop-2018
*** Break ***

Histogram

A histogram is a special type of bar chart where a bar's y value is its count. It is used to show data distribution: visualize the skewness and central tendency.

For more details: https://plot.ly/python/reference/histogram
a = [1, 2, 3, 3, 4, 4, 4, 5, 5, 6, 7, 3, 3, 2]
trace = go.Histogram(x=a)
py.iplot([trace], filename='Histogram')

# change the bins by re-defining the "size" parameter in xbins
a = [1, 2, 3, 3, 4, 4, 4, 5, 5, 6, 7, 3, 3, 2]
trace = go.Histogram(x=a, xbins={'size': 1})
py.iplot([trace], filename='Histogram')

# convert into a 100% histogram whose y value is percent...
_____no_output_____
MIT
doc/Class4.ipynb
juniorworld/python-workshop-2018
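Plotly draws the bars, but the underlying bin counts for the list above can be checked directly with NumPy. This is only an illustration of what unit-width bins produce, not Plotly's internal binning:

```python
import numpy as np

a = [1, 2, 3, 3, 4, 4, 4, 5, 5, 6, 7, 3, 3, 2]
# unit-width bins [1,2), [2,3), ..., [7,8]
counts, edges = np.histogram(a, bins=np.arange(1, 9))
print(list(counts))  # -> [1, 2, 4, 3, 2, 1, 1]
```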
Practice:
---
Please download YouTube popularity data from this link. Create three histograms to visualize the distribution of views, likes, dislikes and comments. The histograms should meet the following requirements:
1. One basic histogram to show distribution of "views"
2. One basic histogram to show distribut...
#Write your code here
_____no_output_____
MIT
doc/Class4.ipynb
juniorworld/python-workshop-2018
Line Chart

In Plot.ly, a line chart is defined as a special scatter plot whose points are connected by lines.

For more details: https://plot.ly/python/reference/scatter
# create your first line chart
x = [1, 2, 3]
y = [10, 22, 34]
trace1 = go.Scatter(x=x, y=y, mode='lines')  # mode: 'lines', 'markers', or 'lines+markers'
py.iplot([trace1], filename='line chart')

# add markers by changing mode to "lines+markers"
x = [1, 2, 3]
y = [10, 22, 34]
trace1 = go.Scatter(x=x, y=y, mode='lines+markers')
py.iplot([trace1],...
_____no_output_____
MIT
doc/Class4.ipynb
juniorworld/python-workshop-2018
Practice:
---
Please download stock price data from this link. Create a line chart to visualize the trend of these five listed companies. The line chart should meet the following requirements:
1. Name lines after companies
#Write your code here
_____no_output_____
MIT
doc/Class4.ipynb
juniorworld/python-workshop-2018
Scatter Plot

For more details: https://plot.ly/python/reference/scatter
# create your first scatter plot
x = [1, 2, 3, 4, 5]
y = [10, 22, 34, 40, 50]
trace1 = go.Scatter(x=x, y=y, mode='markers')
py.iplot([trace1], filename='scatter')

# style the markers
x = [1, 2, 3, 4, 5]
y = [10, 22, 34, 40, 50]
trace1 = go.Scatter(x=x, y=y, mode='markers', marker={'size': 10, 'color': 'red'})
py.iplot([trace1], filename='scatter')

# assign ...
_____no_output_____
MIT
doc/Class4.ipynb
juniorworld/python-workshop-2018
Practice:
---
Please download box office data from this link. Create a 3D scatter plot to visualize these movies. The scatter plot should meet the following requirements:
1. X axis represents "Production Budget"
2. Y axis represents "Box Office"
3. Z axis represents "ROI" (Return on Investment)
4. Size scat...
import pandas as pd

movies = pd.read_csv('movies.csv')
colors = []
for genre in movies['Genre']:
    if genre == 'Comedy':
        colors.append(1)
    else:
        colors.append(len(genre))

# Write your code here
_____no_output_____
MIT
doc/Class4.ipynb
juniorworld/python-workshop-2018
*** Break ***

Map

We will learn two types of maps: scatter maps and filled maps. A scatter map shows scattered points on the geographic map, while a filled map shows the value of a region by changing its color on the map.

For more details: https://plot.ly/python/reference/scattermapbox and https://plot.ly/python/reference...
mapbox_token='YOUR TOKEN'
_____no_output_____
MIT
doc/Class4.ipynb
juniorworld/python-workshop-2018
Besides, we need to use the Google Maps API to search for a place's coordinates. So please go to the Google Cloud Platform (https://console.cloud.google.com/google/maps-apis) and activate the Places API.
# install the googlemaps library
!pip3 install googlemaps
import googlemaps

place_api = 'YOUR TOKEN'
client = googlemaps.Client(key=place_api)            # create a client with your API key
univs = client.places('universities in hong kong')  # search for some places
type(univs)  # look into the search result: it's a dictionary
univs.keys...
_____no_output_____
MIT
doc/Class4.ipynb
juniorworld/python-workshop-2018
2. Filled Map

Fill regions on the map with certain colors to represent the statistics. This type of map is formally known as a "choropleth map".
import pandas as pd

freedom_table = pd.read_csv('https://juniorworld.github.io/python-workshop-2018/doc/human-freedom-index.csv')
freedom_table.head()

# the first column, i.e. the ISO country code, can be used to create a map
trace = go.Choropleth(
    locations=freedom_table['ISO_code'],
    z=freedom_table['human freedom'...
_____no_output_____
MIT
doc/Class4.ipynb
juniorworld/python-workshop-2018
Practice:
---
Please create a world map representing the GDP values of the countries recorded in freedom_table. The map should meet the following requirements:
1. colorscale = Reds
2. projection type: natural earth
#Write your code here
_____no_output_____
MIT
doc/Class4.ipynb
juniorworld/python-workshop-2018
Import Libraries & Define Functions
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import glob

sns.set(style='whitegrid')

def frame_it(path):
    csv_files = glob.glob(path + '/*.csv')
    df_list = []
    for filename in csv_files:
        df = pd.read_csv(filename, index_col='Unnamed: 0', header=0)
        ...
_____no_output_____
MIT
1-dl-project/dl-9-baseline-model-analysis.ipynb
luiul/statistics-meets-logistics
Analysis
# MODIFY!
df = frame_it('./baseline-err')
# transpose the data frame due to the way we exported the data
df = df.T
df_rmse = df.sort_values('RMSE')
df_rmse
df_rmse.to_csv(f'./analysis/{notebook_name}.csv')
_____no_output_____
MIT
1-dl-project/dl-9-baseline-model-analysis.ipynb
luiul/statistics-meets-logistics
ERR Values [MBit/s] and [(MBit/s)^2]
df_rmse.style.highlight_min(color='lightgrey', axis=0).set_table_styles(
    [{'selector': 'tr:hover', 'props': [('background-color', '')]}])
_____no_output_____
MIT
1-dl-project/dl-9-baseline-model-analysis.ipynb
luiul/statistics-meets-logistics
RMSE Performance Decline based on Best Performance [%]
df_rmse_min = df_rmse.apply(lambda value: -((value/df.min())-1), axis=1)
df_rmse_min = df_rmse_min.sort_values('RMSE', ascending=False)
df_rmse_min.to_csv(f'./analysis/{notebook_name}-min.csv')
df_rmse_min.style.highlight_max(color='lightgrey', axis=0).set_table_styles([{'selector': 'tr:hover', 'props': [('background...
_____no_output_____
MIT
1-dl-project/dl-9-baseline-model-analysis.ipynb
luiul/statistics-meets-logistics
RMSE Performance Increment based on Worst Performance [%]
df_rmse_max = df.apply(lambda value: abs((value/df.max())-1), axis=1)
df_rmse_max = df_rmse_max.sort_values('RMSE', ascending=False)
df_rmse_max.to_csv(f'./analysis/{notebook_name}-max.csv')
df_rmse_max.style.highlight_max(color='lightgrey', axis=0).set_table_styles([{'selector': 'tr:hover', 'props': [('background-co...
_____no_output_____
MIT
1-dl-project/dl-9-baseline-model-analysis.ipynb
luiul/statistics-meets-logistics
Visualization
ax = sns.barplot(data=df_rmse, x='RMSE', y=df_rmse.index, palette='mako')
show_values_on_bars(ax, "h", 0.1)
ax.set(ylabel='Model', xlabel='RMSE [MBit/s]')
ax.tick_params(axis=u'both', which=u'both', length=0)
ax.set_title('Baseline Model RMSE');
ax = sns.barplot(data=df_rmse_min, x='RMSE', y=df_rmse_min.index, palette='mako...
_____no_output_____
MIT
1-dl-project/dl-9-baseline-model-analysis.ipynb
luiul/statistics-meets-logistics
# Based on Jupyter Notebook created by Josh Tobin for CS 189 at UC Berkeley
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from timeit import default_timer as timer
_____no_output_____
MIT
OLS.ipynb
Animeshrockn/github-slideshow
Part 1: Create a synthetic dataset

Let's generate some data from a polynomial
$$y_i = -0.1 x_i^4 - 0.4 x_i^3 - 0.5 x_i^2 + 0.5 x_i + 1 + \epsilon_i\mbox{, where }\epsilon_i \sim \mathcal{N}(0,0.1), x_i \sim \mathrm{Uniform}(-1,1)$$
def polynomial(values, coeffs):
    assert len(values.shape) == 2
    # Coeffs are assumed to be in order 0, 1, ..., n-1
    expanded = np.hstack([coeffs[i] * (values ** i) for i in range(0, len(coeffs))])
    return np.sum(expanded, axis=-1)

def polynomial_data(coeffs, n_data=100, x_range=[-1, 1], eps=0.1):
    x = np...
_____no_output_____
MIT
OLS.ipynb
Animeshrockn/github-slideshow
Let's inspect it
# Good to look at shapes, some values
print(x.shape)
print(y.shape)
print(x[:5])
print(y[:5])

def plot_polynomial(coeffs, x_range=[-1, 1], color='red', label='polynomial', alpha=1.0):
    values = np.linspace(x_range[0], x_range[1], 1000).reshape([-1, 1])
    poly = polynomial(values, coeffs)
    plt.plot(values, poly,...
_____no_output_____
MIT
OLS.ipynb
Animeshrockn/github-slideshow
Part 2: Ordinary least squares (OLS)

Let's code up a naive implementation of the OLS solution
$$L(\vec{w}) = \sum_{i=1}^{N} (y_i - \vec{w}^\top\vec{x}_i)^2 = \Vert \vec{y} - X\vec{w} \Vert_2^2$$
$$\tilde{L}(\vec{w}) := {1 \over N}L(\vec{w}) = {1 \over N}\sum_{i=1}^{N} (y_i - \vec{w}^\top\vec{x}_i)^2 \mbox{ ("Mean Squ...
def least_squares(x, y):
    xTx = x.T.dot(x)
    xTx_inv = np.linalg.inv(xTx)
    w = xTx_inv.dot(x.T.dot(y))
    return w

def avg_loss(x, y, w):
    y_hat = x.dot(w)
    loss = np.mean((y - y_hat) ** 2)
    return loss
_____no_output_____
MIT
OLS.ipynb
Animeshrockn/github-slideshow
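As a sanity check, the normal-equations implementation above should agree with NumPy's `lstsq` on a well-conditioned problem. This sketch uses synthetic data not from the notebook:

```python
import numpy as np

def least_squares(x, y):
    # normal-equations OLS, as defined above
    return np.linalg.inv(x.T.dot(x)).dot(x.T.dot(y))

rng = np.random.default_rng(0)
X = np.hstack([rng.uniform(-1, 1, size=(50, 1)), np.ones((50, 1))])
y = X @ np.array([[2.0], [-1.0]]) + 0.01 * rng.normal(size=(50, 1))

w_ne = least_squares(X, y)
w_np, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(w_ne, w_np))  # -> True
```

`lstsq` uses an SVD-based solve and is the numerically safer choice, which matters later when the singular values of $X$ get small.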
How well does it work? $$\hat{y} := wx + b$$ This is equivalent to: $$\hat{y} = \vec{w}^\top\vec{x}\mbox{ where }\vec{w} = \left(\begin{matrix}w \\b \\\end{matrix}\right)\mbox{ and }\vec{x} = \left(\begin{matrix}x \\1 \\\end{matrix}\right)$$
augmented_x = np.hstack([x, np.ones_like(x)])
linear_coeff = least_squares(augmented_x, y)
loss = avg_loss(augmented_x, y, linear_coeff)
plt.figure(figsize=(10, 5))
plt.scatter(x, y, color='green')
plot_polynomial([linear_coeff[1, 0], linear_coeff[0, 0]])
print(loss)
0.03848975511265433
MIT
OLS.ipynb
Animeshrockn/github-slideshow
Part 3: Polynomial features

$$\vec{x} = \left(\begin{matrix}1 \\x \\x^2 \\\vdots \\x^d \\\end{matrix}\right)$$
$$\hat{y} = \vec{w}^\top\vec{x} = \sum_{i=0}^{d}w_i x^i$$

Can fit a model that is *non-linear* in the input with *linear* regression!
def polynomial_features(x, order):
    features = np.hstack([x**i for i in range(0, order+1)])
    return features

def plot_regression(x, y, degree):
    start = timer()
    features = polynomial_features(x, degree)
    w = least_squares(features, y)
    loss = avg_loss(features, y, w)
    end = timer()
    plt.fi...
_____no_output_____
MIT
OLS.ipynb
Animeshrockn/github-slideshow
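A quick concrete check of what `polynomial_features` produces: each column is a power of the input, starting at $x^0$:

```python
import numpy as np

def polynomial_features(x, order):
    # as defined above: columns are x**0, x**1, ..., x**order
    return np.hstack([x**i for i in range(0, order + 1)])

x = np.array([[2.0], [3.0]])
print(polynomial_features(x, 3))
# -> [[ 1.  2.  4.  8.]
#     [ 1.  3.  9. 27.]]
```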
Part 4: Hyperparameters

What degree of polynomial features should we choose? (Previously we picked 4 because we know how the data was generated, but in practice we don't.) This is known as "model selection", since different hyperparameter choices result in different models, so we are effectively choosing the model.
times = []
errs = []
for degree in range(0, 6):
    plot_regression(x, y, degree)

def plot_losses(losses, label='loss', color='b'):
    plt.plot(losses, color=color, label=label)
    plt.semilogy()
    plt.legend()
    plt.title(f"Minimum loss achieved at hyperparam value {np.argmin(losses)}")
    plt.xticks(np.arange(...
_____no_output_____
MIT
OLS.ipynb
Animeshrockn/github-slideshow
Loss never goes up as we increase the degree! Should we choose degree 20?
plot_regression(x, y, 20)
plot_regression(x, y, 10)
plot_regression(x, y, 5)
_____no_output_____
MIT
OLS.ipynb
Animeshrockn/github-slideshow
Why does this happen? $$\mbox{Recall }\vec{w}^{*} = X^\dagger \vec{y} \mbox{ and } X^\dagger = V \Sigma^{-1} U^\top $$ $$\mbox{Let's take a look at the singular values of }X \mbox{, i.e.: diagonal entries of }\Sigma$$
features_20 = polynomial_features(x, 20)
features_20.shape
_, singular_values_20, _ = np.linalg.svd(features_20)
singular_values_20.min()
features_5 = polynomial_features(x, 5)
features_5.shape
_, singular_values_5, _ = np.linalg.svd(features_5)
singular_values_5.min()
_____no_output_____
MIT
OLS.ipynb
Animeshrockn/github-slideshow
$$\mbox{Very small singular value - } X^\top X \mbox{ is close to being non-invertible. As a result, computing }X^\top X \mbox{ and }X^\dagger\mbox{ is numerically unstable.}$$
w_20 = least_squares(features_20, y)
np.abs(w_20).max()
w_5 = least_squares(features_5, y)
np.abs(w_5).max()
_____no_output_____
MIT
OLS.ipynb
Animeshrockn/github-slideshow
$$\mbox{Since }\vec{w}^{*} := \left( X^\top X \right)^{-1} X^\top \vec{y} = X^\dagger \vec{y}\mbox{, small singular values of }X\mbox{ causes }\vec{w}^{*}\mbox{ to have elements that are large in magnitude.}$$ $$\mbox{This is bad - large coordinate values of }\vec{w}^{*}\mbox{ make the prediction sensitive to tiny chan...
np.random.seed(200)
x_new, y_new = polynomial_data(coeffs, 50)
plt.figure(figsize=(10, 5))
plt.scatter(x, y, color='green')
plt.scatter(x_new, y_new, color='blue')

def plot_regression_new(x, y, x_new, y_new, degree, y_axis_limits=None):
    start = timer()
    features = polynomial_features(x, degree)
    w = least_s...
_____no_output_____
MIT
OLS.ipynb
Animeshrockn/github-slideshow
High-degree polynomial doesn't generalize as well to new data. Old data which we have access to is known as "training data" or "train data". New data which we don't have access to is known as "testing data" or "test data". Loss on the old data is known as "training loss" or "train loss", loss on the new data is known a...
plot_regression(x, y, 4)
plot_regression(x_new, y_new, 4)
plot_regression(x, y, 20)
plot_regression(x_new, y_new, 20)
_____no_output_____
MIT
OLS.ipynb
Animeshrockn/github-slideshow
This instability is another sign of overfitting. What happens if we have more data?
x_big, y_big = polynomial_data(coeffs, 200)
plot_regression(x_big, y_big, 5)
plot_regression(x_big, y_big, 10)
plot_regression(x_big, y_big, 20)
_____no_output_____
MIT
OLS.ipynb
Animeshrockn/github-slideshow
Back to picking the optimal hyperparameters
train_losses = []
test_losses = []
for degree in range(21):
    features = polynomial_features(x, degree)
    w = least_squares(features, y)
    train_loss = avg_loss(features, y, w)
    train_losses.append(train_loss)
    features_new = polynomial_features(x_new, degree)
    test_loss = avg_loss(features_new, y_new, ...
_____no_output_____
MIT
OLS.ipynb
Animeshrockn/github-slideshow
The difference between the training loss and the testing loss is known as the "generalization gap". Would like to pick the hyperparameter that results in the lowest testing loss - but can't use testing loss for training or model selection! (Otherwise will overfit to the testing set and make the testing set pointless) ...
N_TRAIN = x.shape[0] // 2
x_train, y_train = x[:N_TRAIN], y[:N_TRAIN]
x_val, y_val = x[N_TRAIN:], y[N_TRAIN:]

train_losses = []
val_losses = []
test_losses = []
for degree in range(21):
    features_train = polynomial_features(x_train, degree)
    w = least_squares(features_train, y_train)
    train_loss = avg_loss(fea...
_____no_output_____
MIT
OLS.ipynb
Animeshrockn/github-slideshow
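The truncated selection loop above can be sketched end to end on synthetic data. Everything here (the cubic, the noise level, the degree range) is illustrative, not the notebook's data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(100, 1))
y = 0.5 * x**3 - 0.5 * x**2 + 1 + 0.05 * rng.normal(size=(100, 1))

# half for training, half for validation
n_train = x.shape[0] // 2
x_train, y_train = x[:n_train], y[:n_train]
x_val, y_val = x[n_train:], y[n_train:]

def features(x, d):
    return np.hstack([x**i for i in range(d + 1)])

val_losses = []
for d in range(9):
    # lstsq used instead of the normal equations for numerical stability
    w, *_ = np.linalg.lstsq(features(x_train, d), y_train, rcond=None)
    val_losses.append(float(np.mean((y_val - features(x_val, d) @ w) ** 2)))

best = int(np.argmin(val_losses))
print(best)  # the validation-selected degree
```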
# Getting the dataset into the proper place
!mkdir data
!cp '/content/drive/My Drive/datasets/dataset.zip' ./
!unzip -qq dataset.zip -d ./data/
!rm dataset.zip

# Script to generate the processed.csv file
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import os

label...
_____no_output_____
MIT
Crowd_Counter_(InceptionResnetV2).ipynb
imdeepmind/CrowdCounter
Exercises

Electric Machinery Fundamentals

Chapter 3, Problem 3-11
%pylab notebook
Populating the interactive namespace from numpy and matplotlib
Unlicense
Chapman/Ch3-Problem_3-11.ipynb
dietmarw/EK5312
Description

In later years, motors improved and could be run directly from a 60 Hz power supply. As a result, 25 Hz power systems shrank and disappeared. However, there were many perfectly good working 25 Hz motors in factories around the country that owners were not ready to discard. To keep them running, some users ...
fse1 = 60.  # [Hz]
fse2 = 25.  # [Hz]
_____no_output_____
Unlicense
Chapman/Ch3-Problem_3-11.ipynb
dietmarw/EK5312
* What combination of poles on the two machines could convert 60 Hz power to 25 Hz power?

SOLUTION

From the equation, the speed of rotation of the 60 Hz machines would be:
$$n_{sm1} = \frac{120f_{se1}}{p_1} = \frac{7200}{p_1}$$
and the speed of rotation of the 25 Hz machines would be:
$$n_{sm2} = \frac{120f_{se2}}{p_2} = \f...
P1_P2 = (120*fse1) / (120*fse2)
P1_P2
_____no_output_____
Unlicense
Chapman/Ch3-Problem_3-11.ipynb
dietmarw/EK5312
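Since $p_1/p_2 = 12/5$ and pole counts must be even integers, the feasible combinations can be enumerated. This is a quick check I am adding, not part of the original solution:

```python
# p1/p2 must equal 12/5 with both pole counts even integers,
# so p2 must be an even multiple of 5
pairs = [(p2 * 12 // 5, p2) for p2 in range(2, 51, 2) if (p2 * 12) % 5 == 0]
print(pairs[:3])  # -> [(24, 10), (48, 20), (72, 30)]
```

Each pair gives the same shaft speed, e.g. 7200/24 = 3000/10 = 300 r/min, and the text's example of $p_1 = 72$, $p_2 = 30$ is the third entry.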
Let's take an example where $p_1 = 72$. The mechanical speed for machine 1 is therefore:
p1 = 72
n_m1 = (120*fse1)/p1
n_m1
_____no_output_____
Unlicense
Chapman/Ch3-Problem_3-11.ipynb
dietmarw/EK5312
Calculating the speed for machine 2 gives:
p2 = p1 / P1_P2
n_m2 = (120*fse2)/p2
n_m2
_____no_output_____
Unlicense
Chapman/Ch3-Problem_3-11.ipynb
dietmarw/EK5312
Convolutional Neural Networks: Application

Welcome to Course 4's second assignment! In this notebook, you will:
- Implement helper functions that you will use when implementing a TensorFlow model
- Implement a fully functioning ConvNet using TensorFlow

**After this assignment you will be able to:**
- Build and train a Con...
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
import tensorflow as tf
from tensorflow.python.framework import ops
from cnn_utils import *

%matplotlib inline
np.random.seed(1)
_____no_output_____
MIT
Deep Learning Specialisation/Convolutional Neural Networks/Convolution model - Application.ipynb
rakeshbal99/Machine-Learning-Coursera
Run the next cell to load the "SIGNS" dataset you are going to use.
# Loading the data (signs)
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
_____no_output_____
MIT
Deep Learning Specialisation/Convolutional Neural Networks/Convolution model - Application.ipynb
rakeshbal99/Machine-Learning-Coursera
As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5.The next cell will show you an example of a labelled image in the dataset. Feel free to change the value of `index` below and re-run to see different examples.
# Example of a picture
index = 265
plt.imshow(X_train_orig[index])
print("y = " + str(np.squeeze(Y_train_orig[:, index])))
y = 2
MIT
Deep Learning Specialisation/Convolutional Neural Networks/Convolution model - Application.ipynb
rakeshbal99/Machine-Learning-Coursera
In Course 2, you built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it.

To get started, let's examine the shapes of your data.
X_train = X_train_orig/255.
X_test = X_test_orig/255.
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print("number of training examples = " + str(X_train.shape[0]))
print("number of test examples = " + str(X_test.shape[0]))
print("X_train shape: " + str(X_train.shape))
...
number of training examples = 1080
number of test examples = 120
X_train shape: (1080, 64, 64, 3)
Y_train shape: (1080, 6)
X_test shape: (120, 64, 64, 3)
Y_test shape: (120, 6)
MIT
Deep Learning Specialisation/Convolutional Neural Networks/Convolution model - Application.ipynb
rakeshbal99/Machine-Learning-Coursera
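`convert_to_one_hot` comes from `cnn_utils`, whose source is not shown here; a minimal sketch of what it is assumed to do (labels to one-hot columns, so the `.T` above yields one row per example):

```python
import numpy as np

def convert_to_one_hot(labels, C):
    # assumed behavior: returns shape (C, m), so .T gives (m, C) as in the cell above
    return np.eye(C)[labels.reshape(-1)].T

Y = np.array([2, 0, 5])
one_hot = convert_to_one_hot(Y, 6).T
print(one_hot.shape)  # -> (3, 6)
```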
1.1 - Create placeholders

TensorFlow requires that you create placeholders for the input data that will be fed into the model when running the session.

**Exercise**: Implement the function below to create placeholders for the input image X and the output Y. You should not define the number of training examples for the m...
# GRADED FUNCTION: create_placeholders

def create_placeholders(n_H0, n_W0, n_C0, n_y):
    """
    Creates the placeholders for the tensorflow session.

    Arguments:
    n_H0 -- scalar, height of an input image
    n_W0 -- scalar, width of an input image
    n_C0 -- scalar, number of channels of the input
    n_...
X = Tensor("X:0", shape=(?, 64, 64, 3), dtype=float32) Y = Tensor("Y:0", shape=(?, 6), dtype=float32)
MIT
Deep Learning Specialisation/Convolutional Neural Networks/Convolution model - Application.ipynb
rakeshbal99/Machine-Learning-Coursera
**Expected Output** X = Tensor("Placeholder:0", shape=(?, 64, 64, 3), dtype=float32) Y = Tensor("Placeholder_1:0", shape=(?, 6), dtype=float32) 1.2 - Initialize parametersYou will initialize weights/filters $W1$ and $W2$ using `tf.contrib.layers.xavier_initializer(seed = 0)`. You don't need to worry about bias ...
# GRADED FUNCTION: initialize_parameters

def initialize_parameters():
    """
    Initializes weight parameters to build a neural network with tensorflow. The shapes are:
        W1 : [4, 4, 3, 8]
        W2 : [2, 2, 8, 16]

    Returns:
    parameters -- a dictionary of tensors containi...
W1 = [ 0.00131723 0.14176141 -0.04434952 0.09197326 0.14984085 -0.03514394 -0.06847463 0.05245192] W2 = [-0.08566415 0.17750949 0.11974221 0.16773748 -0.0830943 -0.08058 -0.00577033 -0.14643836 0.24162132 -0.05857408 -0.19055021 0.1345228 -0.22779644 -0.1601823 -0.16117483 -0.10286498]
MIT
Deep Learning Specialisation/Convolutional Neural Networks/Convolution model - Application.ipynb
rakeshbal99/Machine-Learning-Coursera
** Expected Output:** W1 = [ 0.00131723 0.14176141 -0.04434952 0.09197326 0.14984085 -0.03514394 -0.06847463 0.05245192] W2 = [-0.08566415 0.17750949 0.11974221 0.16773748 -0.0830943 -0.08058 -0.00577033 -0.14643836 0.24162132...
# GRADED FUNCTION: forward_propagation

def forward_propagation(X, parameters):
    """
    Implements the forward propagation for the model:
    CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED

    Arguments:
    X -- input dataset placeholder, of shape (input size, number of ex...
Z3 = [[-0.44670227 -1.57208765 -1.53049231 -2.31013036 -1.29104376 0.46852064] [-0.17601591 -1.57972014 -1.4737016 -2.61672091 -1.00810647 0.5747785 ]]
MIT
Deep Learning Specialisation/Convolutional Neural Networks/Convolution model - Application.ipynb
rakeshbal99/Machine-Learning-Coursera
**Expected Output**: Z3 = [[-0.44670227 -1.57208765 -1.53049231 -2.31013036 -1.29104376 0.46852064] [-0.17601591 -1.57972014 -1.4737016 -2.61672091 -1.00810647 0.5747785 ]] 1.3 - Compute costImplement the compute cost function below. You might find these two functions helpful: - **tf.nn.sof...
# GRADED FUNCTION: compute_cost

def compute_cost(Z3, Y):
    """
    Computes the cost

    Arguments:
    Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
    Y -- "true" labels vector placeholder, same shape as Z3

    Returns:
    cost - Tensor of the c...
cost = 2.91034
MIT
Deep Learning Specialisation/Convolutional Neural Networks/Convolution model - Application.ipynb
rakeshbal99/Machine-Learning-Coursera
**Expected Output**: cost = 2.91034 1.4 Model Finally you will merge the helper functions you implemented above to build a model. You will train it on the SIGNS dataset. You have implemented `random_mini_batches()` in the Optimization programming assignment of course 2. Remember that thi...
# GRADED FUNCTION: model

def model(X_train, Y_train, X_test, Y_test, learning_rate=0.009,
          num_epochs=100, minibatch_size=64, print_cost=True):
    """
    Implements a three-layer ConvNet in Tensorflow:
    CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED

    A...
_____no_output_____
MIT
Deep Learning Specialisation/Convolutional Neural Networks/Convolution model - Application.ipynb
rakeshbal99/Machine-Learning-Coursera
Run the following cell to train your model for 100 epochs. Check that your costs after epochs 0 and 5 match our output. If not, stop the cell and go back to your code!
_, _, parameters = model(X_train, Y_train, X_test, Y_test)
Cost after epoch 0: 1.917929
Cost after epoch 5: 1.506757
Cost after epoch 10: 0.955359
Cost after epoch 15: 0.845802
Cost after epoch 20: 0.701174
Cost after epoch 25: 0.571977
Cost after epoch 30: 0.518435
Cost after epoch 35: 0.495806
Cost after epoch 40: 0.429827
Cost after epoch 45: 0.407291
Cost after epoch 50: 0...
MIT
Deep Learning Specialisation/Convolutional Neural Networks/Convolution model - Application.ipynb
rakeshbal99/Machine-Learning-Coursera
**Expected output**: although it may not match perfectly, your expected output should be close to ours and your cost value should decrease. **Cost after epoch 0 =** 1.917929 **Cost after epoch 5 =** 1.506757 **Train Accuracy =** 0.940741 ...
fname = "images/thumbs_up.jpg"
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64, 64))
plt.imshow(my_image)
_____no_output_____
MIT
Deep Learning Specialisation/Convolutional Neural Networks/Convolution model - Application.ipynb
rakeshbal99/Machine-Learning-Coursera
Convolutions

To perform linear convolutions on images, use `image.convolve()`. The only argument to convolve is an `ee.Kernel`, which is specified by a shape and the weights in the kernel. Each pixel of the image output by `convolve()...
import subprocess

try:
    import geehydro
except ImportError:
    print('geehydro package not installed. Installing ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro'])

# Import libraries
import ee
import folium
import geehydro

# Authenticate and initialize Earth Engine API
try:
    e...
_____no_output_____
MIT
Image/06_convolutions.ipynb
giswqs/earthengine-py-documentation
Create an interactive map

This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `...
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
_____no_output_____
MIT
Image/06_convolutions.ipynb
giswqs/earthengine-py-documentation
Add Earth Engine Python script
# Load and display an image. image = ee.Image('LANDSAT/LC08/C01/T1_TOA/LC08_044034_20140318') Map.setCenter(-121.9785, 37.8694, 11) Map.addLayer(image, {'bands': ['B5', 'B4', 'B3'], 'max': 0.5}, 'input image') # Define a boxcar or low-pass kernel. # boxcar = ee.Kernel.square({ # 'radius': 7, 'units': 'pixels', 'norm...
_____no_output_____
MIT
Image/06_convolutions.ipynb
giswqs/earthengine-py-documentation
The output of convolution with the low-pass filter should look something like Figure 1. Observe that the arguments to the kernel determine its size and coefficients. Specifically, with the `units` parameter set to pixels, the `radius` parameter specifies the number of pixels from the center that the kernel will cover. ...
Map = folium.Map(location=[40, -100], zoom_start=4) Map.setOptions('HYBRID') # Define a Laplacian, or edge-detection kernel. laplacian = ee.Kernel.laplacian8(1, False) # Apply the edge-detection kernel. edgy = image.convolve(laplacian) Map.addLayer(edgy, {'bands': ['B5', 'B4', 'B3'], 'max': 0.5}, ...
_____no_output_____
MIT
Image/06_convolutions.ipynb
giswqs/earthengine-py-documentation
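The `radius`-to-size arithmetic described above can be sketched in plain NumPy. This is only an illustration of how a normalized boxcar kernel is built, not Earth Engine's implementation:

```python
import numpy as np

def boxcar_kernel(radius):
    """Normalized square (boxcar) kernel reaching `radius` pixels from the center."""
    size = 2 * radius + 1              # center pixel plus `radius` pixels on each side
    return np.full((size, size), 1.0 / size ** 2)

k = boxcar_kernel(7)
print(k.shape)   # (15, 15)
```

A radius of 7 pixels therefore yields a 15x15 kernel whose weights sum to 1, which is what makes it an averaging (low-pass) filter.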
Note the format specifier in the visualization parameters. Earth Engine sends display tiles to the Code Editor in JPEG format for efficiency, however edge tiles are sent in PNG format to handle transparency of pixels outside the image boundary. When a visual discontinuity results, setting the format to PNG results in a...
# Create a list of weights for a 9x9 kernel.
list = [1, 1, 1, 1, 1, 1, 1, 1, 1]
# The center of the kernel is zero.
centerList = [1, 1, 1, 1, 0, 1, 1, 1, 1]
# Assemble a list of lists: the 9x9 kernel weights as a 2-D matrix.
lists = [list, list, list, list, centerList, list, list, list, list]
# Create the kernel from t...
{'type': 'Kernel.fixed', 'width': 9, 'height': 9, 'weights': '\n [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]\n [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]\n [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]\n [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]\n [1.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0]\n [1.0, 1.0...
MIT
Image/06_convolutions.ipynb
giswqs/earthengine-py-documentation
A Decision Tree of Observable Operators

Part 1: NEW Observables.

> source: http://reactivex.io/documentation/operators.html#tree
> (transcribed to RxPY 1.5.7, Py2.7 / 2016-12, Gunther Klessinger, [axiros](http://www.axiros.com))

**This tree can help you find the ReactiveX Observable operator you’re looking for.** Ta...
reset_start_time(O.just)
stream = O.just({'answer': rand()})
disposable = subs(stream)
sleep(0.5)
disposable = subs(stream)  # same answer
# all stream ops work, it's a real stream:
disposable = subs(stream.map(lambda x: x.get('answer', 0) * 2))
========== return_value ========== module rx.linq.observable.returnvalue @extensionclassmethod(Observable, alias="just") def return_value(cls, value, scheduler=None): Returns an observable sequence that contains a single element, using the specified scheduler to send out observer messages. There is an al...
MIT
notebooks/reactivex.io/A Decision Tree of Observable Operators. Part I - Creation.ipynb
christiansandberg/RxPY
..that was returned from a function *called at subscribe-time*: **[start](http://reactivex.io/documentation/operators/start.html)**
print('There is a little API difference to RxJS, see Remarks:\n') rst(O.start) def f(): log('function called') return rand() stream = O.start(func=f) d = subs(stream) d = subs(stream) header("Exceptions are handled correctly (an observable should never except):") def breaking_f(): return 1 / 0 stre...
There is a little API difference to RxJS, see Remarks: ========== start ========== module rx.linq.observable.start @extensionclassmethod(Observable) def start(cls, func, scheduler=None): Invokes the specified function asynchronously on the specified scheduler, surfacing the result through an observable sequ...
MIT
notebooks/reactivex.io/A Decision Tree of Observable Operators. Part I - Creation.ipynb
christiansandberg/RxPY
..that was returned from an Action, Callable, Runnable, or something of that sort, called at subscribe-time: **[from](http://reactivex.io/documentation/operators/from.html)**
rst(O.from_iterable) def f(): log('function called') return rand() # aliases: O.from_, O.from_list # 1.: From a tuple: stream = O.from_iterable((1,2,rand())) d = subs(stream) # d = subs(stream) # same result # 2. from a generator gen = (rand() for j in range(3)) stream = O.from_iterable(gen) d = subs(stream) ...
========== from_callback ========== module rx.linq.observable.fromcallback @extensionclassmethod(Observable) def from_callback(cls, func, mapper=None): Converts a callback function to an observable sequence. Keyword arguments: func -- {Function} Function with a callback as the last parameter to ...
MIT
notebooks/reactivex.io/A Decision Tree of Observable Operators. Part I - Creation.ipynb
christiansandberg/RxPY
...after a specified delay: **[timer](http://reactivex.io/documentation/operators/timer.html)**
rst()
# start a stream of 0, 1, 2, .. after 200 ms, with a delay of 100 ms:
stream = O.timer(200, 100).time_interval()\
          .map(lambda x: 'val:%s dt:%s' % (x.value, x.interval))\
          .take(3)
d = subs(stream, name='observer1')
# intermix directly with another one
d = subs(stream, name='observer2')
0.8 M New subscription on stream 274470005 3.4 M New subscription on stream 274470005
MIT
notebooks/reactivex.io/A Decision Tree of Observable Operators. Part I - Creation.ipynb
christiansandberg/RxPY
...that emits a sequence of items repeatedly: **[repeat](http://reactivex.io/documentation/operators/repeat.html) **
rst(O.repeat)
# repeat is over *values*, not function calls. Use generate or create for function calls!
subs(O.repeat({'rand': time.time()}, 3))

header('do while:')
l = []
def condition(x):
    l.append(1)
    return True if len(l) < 2 else False
stream = O.just(42).do_while(condition)
d = subs(stream)
========== repeat ========== module rx.linq.observable.repeat @extensionclassmethod(Observable) def repeat(cls, value=None, repeat_count=None, scheduler=None): Generates an observable sequence that repeats the given element the specified number of times, using the specified scheduler to send out observer...
MIT
notebooks/reactivex.io/A Decision Tree of Observable Operators. Part I - Creation.ipynb
christiansandberg/RxPY
...from scratch, with custom logic and cleanup (calling a function again and again): **[create](http://reactivex.io/documentation/operators/create.html) **
rx = O.create rst(rx) def f(obs): # this function is called for every observer obs.on_next(rand()) obs.on_next(rand()) obs.on_completed() def cleanup(): log('cleaning up...') return cleanup stream = O.create(f).delay(200) # the delay causes the cleanup called before the subs gets the va...
========== generate_with_relative_time ========== module rx.linq.observable.generatewithrelativetime @extensionclassmethod(Observable) def generate_with_relative_time(cls, initial_state, condition, iterate, Generates an observable sequence by iterating a state from an initial state until the condition fails....
MIT
notebooks/reactivex.io/A Decision Tree of Observable Operators. Part I - Creation.ipynb
christiansandberg/RxPY
...for each observer that subscribes OR according to a condition at subscription time: **[defer / if_then](http://reactivex.io/documentation/operators/defer.html) **
rst(O.defer) # plural! (unique per subscription) streams = O.defer(lambda: O.just(rand())) d = subs(streams) d = subs(streams) # gets other values - created by subscription! # evaluating a condition at subscription time in order to decide which of two streams to take. rst(O.if_then) cond = True def should_run(): re...
========== if_then ========== module rx.linq.observable.ifthen @extensionclassmethod(Observable) def if_then(cls, condition, then_source, else_source=None, scheduler=None): Determines whether an observable collection contains values. Example: 1 - res = reactivex.Observable.if(condition, obs1) 2 - re...
MIT
notebooks/reactivex.io/A Decision Tree of Observable Operators. Part I - Creation.ipynb
christiansandberg/RxPY
...that emits a sequence of integers: **[range](http://reactivex.io/documentation/operators/range.html) **
rst(O.range)
d = subs(O.range(0, 3))
========== range ========== module rx.linq.observable.range @extensionclassmethod(Observable) def range(cls, start, count, scheduler=None): Generates an observable sequence of integral numbers within a specified range, using the specified scheduler to send out observer messages. 1 - res = reactivex....
MIT
notebooks/reactivex.io/A Decision Tree of Observable Operators. Part I - Creation.ipynb
christiansandberg/RxPY
...at particular intervals of time: **[interval](http://reactivex.io/documentation/operators/interval.html) **(you can `.publish()` it to get an easy "hot" observable)
rst(O.interval) d = subs(O.interval(100).time_interval()\ .map(lambda x, v: '%(interval)s %(value)s' \ % ItemGetter(x)).take(3))
========== interval ========== module rx.linq.observable.interval @extensionclassmethod(Observable) def interval(cls, period, scheduler=None): Returns an observable sequence that produces a value after each period. Example: 1 - res = reactivex.Observable.interval(1000) 2 - res = reactivex.Observ...
MIT
notebooks/reactivex.io/A Decision Tree of Observable Operators. Part I - Creation.ipynb
christiansandberg/RxPY
...after a specified delay (see timer) ...that completes without emitting items: **[empty](http://reactivex.io/documentation/operators/empty-never-throw.html) **
rst(O.empty)
d = subs(O.empty())
========== empty ========== module rx.linq.observable.empty @extensionclassmethod(Observable) def empty(cls, scheduler=None): Returns an empty observable sequence, using the specified scheduler to send out the single OnCompleted message. 1 - res = reactivex.empty() 2 - res = reactivex.empty(rx.Sched...
MIT
notebooks/reactivex.io/A Decision Tree of Observable Operators. Part I - Creation.ipynb
christiansandberg/RxPY
...that does nothing at all: **[never](http://reactivex.io/documentation/operators/empty-never-throw.html) **
rst(O.never)
d = subs(O.never())
========== never ========== 0.7 T18 [next] 104.4: 0 -1.0 (observer hash: 274473797) 1.1 T18 [next] 104.8: 1 -0.5 (observer hash: 274473797)module rx.linq.observable.never @extensionclassmethod(Observable) def never(cls): Returns a non-terminating observable sequence, which can be used to denote a...
MIT
notebooks/reactivex.io/A Decision Tree of Observable Operators. Part I - Creation.ipynb
christiansandberg/RxPY
...that excepts: **[throw](http://reactivex.io/documentation/operators/empty-never-throw.html) **
rst(O.on_error)
d = subs(O.on_error(ZeroDivisionError))
========== throw ========== module rx.linq.observable.throw @extensionclassmethod(Observable, alias="throw_exception") def on_error(cls, exception, scheduler=None): Returns an observable sequence that terminates with an exception, using the specified scheduler to send out the single OnError message. 1 -...
MIT
notebooks/reactivex.io/A Decision Tree of Observable Operators. Part I - Creation.ipynb
christiansandberg/RxPY
`networkx` supports several graph types: simple undirected graphs, simple digraphs (parallel edges are not allowed), multigraphs, and multidigraphs. Read more [here](https://networkx.github.io/documentation/stable/reference/classes/index.html). Since street networks contain parallel directed edges, we want a multidigraph to handle our maps
import random
import networkx as nx

G = nx.MultiDiGraph()

# add some initial nodes
nodes = [i for i in range(1, 20)]
G.add_nodes_from(nodes)

# add random edges; add_edge silently creates any endpoint node that does not exist yet
num_edges = 70
for _ in range(num_edges):
    u = random.randint(1, 101)
    v = random.randint(1, 101)
    G.add_edge(u, v, weight=5)

nx.draw(G)
_____no_output_____
Apache-2.0
networkx.ipynb
sierraone/GettingStarted
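The practical difference between a simple graph and a multigraph is easy to demonstrate: a duplicate edge is silently merged in `nx.Graph` but kept as a parallel edge in `nx.MultiDiGraph`. A toy sketch, independent of the map data:

```python
import networkx as nx

g = nx.Graph()
g.add_edge(1, 2)
g.add_edge(1, 2)                 # duplicate edge is merged in a simple graph
print(g.number_of_edges())       # 1

mg = nx.MultiDiGraph()
mg.add_edge(1, 2)
mg.add_edge(1, 2)                # parallel edges are preserved in a multidigraph
print(mg.number_of_edges())      # 2
```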
One thing you have probably noticed is that the graph returned by `osmnx` in the first tutorial could be handled as if it were a plain vanilla `networkx` graph. That is very good news for us, because we can use [`networkx` utilities and algorithms](https://networkx.github.io/documentation/stable/reference...
import itertools

nx.number_strongly_connected_components(G)

# we will take the first 10 simple cycles
for cycle in itertools.islice(nx.simple_cycles(G), 10):
    print(cycle)

# it returns an iterator, so we unpack it
print(*nx.all_pairs_shortest_path(G))
(1, {1: [1], 16: [1, 16], 70: [1, 70], 79: [1, 79], 69: [1, 69], 53: [1, 53], 48: [1, 16, 48], 56: [1, 16, 56], 9: [1, 16, 9], 81: [1, 16, 81], 4: [1, 16, 4], 6: [1, 16, 6], 37: [1, 70, 37], 84: [1, 70, 84], 64: [1, 70, 64], 52: [1, 70, 52], 3: [1, 79, 3], 8: [1, 79, 8], 22: [1, 79, 22], 93: [1, 69, 93], 46: [1, 69, 46...
Apache-2.0
networkx.ipynb
sierraone/GettingStarted
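The same algorithms work on any hand-built multidigraph. A tiny deterministic example with hypothetical edge weights, showing that weighted shortest-path search prefers the cheaper two-hop route over the heavier direct edge:

```python
import networkx as nx

g = nx.MultiDiGraph()
g.add_edge('a', 'b', weight=1)
g.add_edge('b', 'c', weight=1)
g.add_edge('a', 'c', weight=5)   # direct edge is heavier than the detour

# Dijkstra picks the cheaper two-hop route
print(nx.shortest_path(g, 'a', 'c', weight='weight'))   # ['a', 'b', 'c']
```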
Okay, let's load the UofT map again and see what we can do
G = ox.graph_from_address("university of toronto", dist = 300) fig, ax = ox.plot_graph(G) # here are the nodes [*G.nodes()] # this will take a while # this can come in handy in a lot of case studies for distance_to_others in itertools.islice(nx.all_pairs_shortest_path(G), 5): # we will take only five print(distance...
(130170945, {130170945: [130170945], 55808527: [130170945, 55808527], 389677905: [130170945, 389677905], 127284680: [130170945, 127284680], 127284677: [130170945, 127284677], 55808564: [130170945, 55808527, 55808564], 55808512: [130170945, 55808527, 55808512], 2143434279: [130170945, 55808527, 2143434279], 3996671926: ...
Apache-2.0
networkx.ipynb
sierraone/GettingStarted
English
response = requests.get(videos_url_en) page = BeautifulSoup(response.text, 'html5lib') content_divs = page.find_all('div', class_='content-inner') len(content_divs) for content_div in content_divs: video_block = content_div.find('div', class_='video-block') video_wrapper = video_block.find('div', class_='sqs-v...
_____no_output_____
MIT
notebooks/3. Web Scraping Starter.ipynb
NathanMaton/forked_sushi_chef
Burmese
response = requests.get(videos_url_my) page2 = BeautifulSoup(response.text, 'html5lib') content_divs2 = page2.find_all('div', class_='content-inner') len(content_divs2) for content_div in content_divs2: video_block = content_div.find('div', class_='video-block') video_wrapper = video_block.find('div', class_='...
https://player.vimeo.com/video/262570817?app_id=122963&wmode=opaque https://player.vimeo.com/video/262755072?app_id=122963&wmode=opaque https://player.vimeo.com/video/262755467?app_id=122963&wmode=opaque https://player.vimeo.com/video/262755673?app_id=122963&wmode=opaque https://player.vimeo.com/video/267661918?app_id=...
MIT
notebooks/3. Web Scraping Starter.ipynb
NathanMaton/forked_sushi_chef
**This is an example Notebook for running training on Higgs vs background signal classification. ** **Background:** High-energy collisions at the Large Hadron Collider (LHC) produce particles that interact with particle detectors. One important task is to classify different types of collisions based on their physics co...
!wget https://archive.ics.uci.edu/ml/machine-learning-databases/00280/HIGGS.csv.gz
--2022-02-25 23:13:36-- https://archive.ics.uci.edu/ml/machine-learning-databases/00280/HIGGS.csv.gz Resolving archive.ics.uci.edu (archive.ics.uci.edu)... 128.195.10.252 Connecting to archive.ics.uci.edu (archive.ics.uci.edu)|128.195.10.252|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 281...
Apache-2.0
Higgs_Classification/higgs_classification.ipynb
MonitSharma/MS_Thesis
2. Unzip the dataset folder
!gzip -d HIGGS.csv.gz from sklearn.datasets import make_gaussian_quantiles from sklearn.ensemble import AdaBoostClassifier from sklearn.metrics import accuracy_score from sklearn.tree import DecisionTreeClassifier from sklearn.metrics import confusion_matrix from sklearn.model_selection import train_test_split import p...
_____no_output_____
Apache-2.0
Higgs_Classification/higgs_classification.ipynb
MonitSharma/MS_Thesis
**Load the file using pandas library**
data = pd.read_csv('./HIGGS.csv')
data
_____no_output_____
Apache-2.0
Higgs_Classification/higgs_classification.ipynb
MonitSharma/MS_Thesis
Assign the first column (column 0) to the class labels (1 for signal, 0 for background) and all other columns to the feature matrix X. In this example, for the sake of a fast check, we use only 1000 samples. To train on the entire dataset, switch to the full slices shown in the comments below.
X = data.iloc[:1000, 1:]  # full dataset: data.iloc[:, 1:]
y = data.iloc[:1000, 0]   # full dataset: data.iloc[:, 0]
_____no_output_____
Apache-2.0
Higgs_Classification/higgs_classification.ipynb
MonitSharma/MS_Thesis
Split your data into training, validation, and test samples. The first split holds out 20% of the data for validation; the second holds out 20% of the remainder for testing.
X_train1, X_val, y_train1, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X_train1, y_train1, test_size=0.2, random_state=42)
_____no_output_____
Apache-2.0
Higgs_Classification/higgs_classification.ipynb
MonitSharma/MS_Thesis
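Two successive `test_size=0.2` splits leave 64% of the samples for training, 16% for testing, and 20% for validation. A quick sanity check on a toy array (not the HIGGS data):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X_demo = np.arange(100).reshape(100, 1)
y_demo = np.arange(100)

# first split: 20% held out for validation
X_tr1, X_va, y_tr1, y_va = train_test_split(X_demo, y_demo, test_size=0.2, random_state=42)
# second split: 20% of the remainder held out for testing
X_tr, X_te, y_tr, y_te = train_test_split(X_tr1, y_tr1, test_size=0.2, random_state=42)

print(len(X_tr), len(X_te), len(X_va))   # 64 16 20
```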
**Visualize your data - One histogram per feature column** Detailed information on what each feature column is can be found in *Attribute Information* section on the [UCI Machine learning Repositery](https://archive.ics.uci.edu/ml/datasets/HIGGS). For further information, refer to the [paper](https://www.nature.com/art...
import math
import matplotlib.pyplot as plt

# use ceiling division so every feature column gets an axis
# (len(X_train.columns) // 3 would drop the last column when there are 28 features)
n_cols = len(X_train.columns)
fig, axes = plt.subplots(math.ceil(n_cols / 3), 3, figsize=(12, 48))
for i, axis in enumerate(axes.flatten()):
    if i < n_cols:
        X_train.hist(column=X_train.columns[i], bins=100, ax=axis)
_____no_output_____
Apache-2.0
Higgs_Classification/higgs_classification.ipynb
MonitSharma/MS_Thesis
**Setup the Boosted Decision Tree model** (BDT explanation [here](https://docs.microsoft.com/en-us/azure/machine-learning/algorithm-module-reference/boosted-decision-tree-regression#:~:text=Boosting%20means%20that%20each%20tree,small%20risk%20of%20less%20coverage.))
classifier = AdaBoostClassifier(
    DecisionTreeClassifier(max_depth=1),  # depth-1 trees ("decision stumps") as the weak learner
    n_estimators=200
)
_____no_output_____
Apache-2.0
Higgs_Classification/higgs_classification.ipynb
MonitSharma/MS_Thesis
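To see what an ensemble of depth-1 trees ("decision stumps") can do on its own, here is a self-contained sketch on synthetic concentric-class data from `make_gaussian_quantiles` — toy data, not the HIGGS set:

```python
from sklearn.datasets import make_gaussian_quantiles
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# concentric two-class data that a single axis-aligned stump cannot separate
X_toy, y_toy = make_gaussian_quantiles(n_samples=500, n_features=2,
                                       n_classes=2, random_state=0)

stump_boost = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                                 n_estimators=200, random_state=0)
stump_boost.fit(X_toy, y_toy)
acc = accuracy_score(y_toy, stump_boost.predict(X_toy))
print(acc)
```

Boosting 200 weak learners recovers a decision boundary that no single stump could represent.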
**Train the Boosted Decision Tree model**
from sklearn.ensemble import AdaBoostClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.datasets import load_breast_cancer import pandas as pd import numpy as np from sklearn.model_selection import train_test_split from sklearn.metrics import confusion_matrix from sklearn.preprocessing import Label...
_____no_output_____
Apache-2.0
Higgs_Classification/higgs_classification.ipynb
MonitSharma/MS_Thesis
**Predict on new testing data**
predictions = classifier.predict(X_test)
_____no_output_____
Apache-2.0
Higgs_Classification/higgs_classification.ipynb
MonitSharma/MS_Thesis
**Print the confusion matrix, which describes the performance of the model's classification by displaying the number of True Positives, True Negatives, False Positives and False Negatives. More info on [Wikipedia](https://en.wikipedia.org/wiki/Confusion_matrix)**
confusion_matrix(y_test, predictions)
_____no_output_____
Apache-2.0
Higgs_Classification/higgs_classification.ipynb
MonitSharma/MS_Thesis
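The layout scikit-learn uses is rows = true class, columns = predicted class. A minimal worked example on hand-made labels:

```python
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 1]
y_hat  = [0, 1, 1, 1, 0]
cm = confusion_matrix(y_true, y_hat)
print(cm)
# [[1 1]   -> 1 true negative,  1 false positive
#  [1 2]]  -> 1 false negative, 2 true positives
```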
**Setup the Neural Network** (some useful info [here](https://towardsdatascience.com/a-gentle-introduction-to-neural-networks-series-part-1-2b90b87795bc))
from keras.models import Sequential
from keras.layers import Dense

model_nn = Sequential()
model_nn.add(Dense(28, input_dim=28, activation='relu'))
model_nn.add(Dense(8, activation='relu'))
model_nn.add(Dense(1, activation='sigmoid'))
_____no_output_____
Apache-2.0
Higgs_Classification/higgs_classification.ipynb
MonitSharma/MS_Thesis
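Each `Dense` layer has `(n_inputs + 1) * n_units` parameters (one weight per input-output pair plus one bias per unit), so the 28 → 28 → 8 → 1 network above can be sanity-checked by hand; the result should agree with `model_nn.count_params()`:

```python
def dense_params(n_in, n_out):
    # one weight per input-output pair, plus one bias per output unit
    return n_in * n_out + n_out

total = dense_params(28, 28) + dense_params(28, 8) + dense_params(8, 1)
print(total)   # 1053
```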
**Train the Neural Network and save your model weights in an h5 file**
# compile the keras model model_nn.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) # fit the keras model on the dataset history=model_nn.fit(X, y,validation_data=(X_val,y_val),epochs=5, batch_size=10) # evaluate the keras model _, accuracy = model_nn.evaluate(X, y) model_nn.save('my_model.h5...
dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
Apache-2.0
Higgs_Classification/higgs_classification.ipynb
MonitSharma/MS_Thesis
**Plot accuracy wrt number of epochs**
# summarize history for accuracy
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
_____no_output_____
Apache-2.0
Higgs_Classification/higgs_classification.ipynb
MonitSharma/MS_Thesis
**Plot training loss wrt number of epochs**
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()

y_pred = model_nn.predict(X_test)
confusion_matrix(y_test, y_pred.round())
_____no_output_____
Apache-2.0
Higgs_Classification/higgs_classification.ipynb
MonitSharma/MS_Thesis
**Plot the ROC (Receiver Operating Characteristic) Curve** (more info on ROC can be found [here](https://en.wikipedia.org/wiki/Receiver_operating_characteristic))
!pip install plot-metric
from plot_metric.functions import BinaryClassification

# Visualisation with plot_metric: BinaryClassification expects (y_true, y_pred, labels);
# pass the raw predicted scores so the ROC curve sweeps over all thresholds
bc = BinaryClassification(y_test, y_pred, labels=["Class 1", "Class 2"])

# Figures
plt.figure(figsize=(5, 5))
bc.plot_roc_curve()
plt.show()
_____no_output_____
Apache-2.0
Higgs_Classification/higgs_classification.ipynb
MonitSharma/MS_Thesis
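Independently of the plot-metric package, the ROC curve and its AUC can be computed directly with scikit-learn; a minimal example on hand-made scores:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])   # raw scores, not rounded labels

fpr, tpr, thresholds = roc_curve(y_true, scores)
auc = roc_auc_score(y_true, scores)
print(auc)   # 0.75
```

The curve is built by sweeping a threshold over the raw scores; rounding the predictions first collapses it to a single operating point.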
Combining Data

Practice combining data from two different data sets. In the same folder as this Jupyter notebook, there are two csv files:

* rural_population_percent.csv
* electricity_access_percent.csv

They both come from the World Bank Indicators data.

* https://data.worldbank.org/indicator/SP.RUR.TOTL.ZS
* https://data....
pd.read_json('http://api.worldbank.org/v2/indicator/SP.RUR.TOTL.ZS/?format=json') # TODO: import the pandas library import pandas as pd # TODO: read in each csv file into a separate variable # HINT: remember from the Extract material that these csv file have some formatting issues # HINT: The file paths are 'rural_pop...
_____no_output_____
MIT
lessons/ETLPipelines/5_combinedata_exercise/.ipynb_checkpoints/5_combining_data-checkpoint.ipynb
GooseHuang/Udacity-Data-Scientist-Nanodegree
Exercise 2 (Challenge)

This exercise is more challenging. The resulting data frame should look like this:

|Country Name|Country Code|Year|Rural_Value|Electricity_Value|
|--|--|--|--|--|
|Aruba|ABW|1960|49.224|49.239|

...etc.

Order the results in the dataframe by country and then by year. Here are a few pandas methods that ...
# TODO: merge the data sets together according to the instructions. First, use the # melt method to change the formatting of each data frame so that it looks like this: # Country Name, Country Code, Year, Rural Value # Country Name, Country Code, Year, Electricity Value # TODO: drop any columns from the data frames t...
_____no_output_____
MIT
lessons/ETLPipelines/5_combinedata_exercise/.ipynb_checkpoints/5_combining_data-checkpoint.ipynb
GooseHuang/Udacity-Data-Scientist-Nanodegree
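A minimal illustration of what `melt` does, on a hypothetical two-year slice of the wide-format data (the 1961 value is made up for the example):

```python
import pandas as pd

wide = pd.DataFrame({
    'Country Name': ['Aruba'],
    'Country Code': ['ABW'],
    '1960': [49.224],
    '1961': [49.605],
})

# year columns become rows; the id_vars are repeated for each year
long_df = wide.melt(id_vars=['Country Name', 'Country Code'],
                    var_name='Year', value_name='Rural_Value')
print(long_df)
```

After melting both data frames this way, they can be merged on `Country Name`, `Country Code`, and `Year`.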