Simulating Thousands of Possible Allocations
```py
stocks.head()

# Normalize prices relative to the first day
stock_normed = stocks / stocks.iloc[0]
stock_normed.plot()

# Daily arithmetic returns
stock_daily_ret = stocks.pct_change(1)
stock_daily_ret.head()
```
Log Returns vs Arithmetic Returns

We will now switch over to using log returns instead of arithmetic returns. For many of our use cases they are almost the same, but most technical analyses require detrending/normalizing the time series, and using log returns is a nice way to do that. Log returns are convenient to work with in many of the algorithms we will encounter. For a full analysis of why we use log returns, check this great article.
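For small daily moves the two kinds of return are nearly identical, which is why the switch is usually harmless. A minimal illustration (the return values here are made up for demonstration):

```py
import numpy as np

r = np.array([0.001, 0.01, 0.05, 0.20])  # hypothetical arithmetic returns
log_r = np.log1p(r)                      # the corresponding log returns
print(np.column_stack([r, log_r]))       # nearly equal for small r, diverging as r grows
```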
```py
log_ret = np.log(stocks / stocks.shift(1))
log_ret.head()

log_ret.hist(bins=100, figsize=(12, 6));
plt.tight_layout()

log_ret.describe().transpose()
log_ret.mean() * 252  # annualize the mean daily log return (252 trading days)

# Compute pairwise covariance of columns
log_ret.cov()
log_ret.cov() * 252  # annualize by multiplying by trading days
```
Single Run for Some Random Allocation
```py
# Set seed (optional)
np.random.seed(101)

# Stock columns
print('Stocks')
print(stocks.columns)
print('\n')

# Create random weights
print('Creating Random Weights')
weights = np.array(np.random.random(4))
print(weights)
print('\n')

# Rebalance weights
print('Rebalance to sum to 1.0')
weights = weights / np.sum(weights)
print(weights)
print('\n')

# Expected return
print('Expected Portfolio Return')
exp_ret = np.sum(log_ret.mean() * weights) * 252
print(exp_ret)
print('\n')

# Expected volatility
print('Expected Volatility')
exp_vol = np.sqrt(np.dot(weights.T, np.dot(log_ret.cov() * 252, weights)))
print(exp_vol)
print('\n')

# Sharpe ratio
SR = exp_ret / exp_vol
print('Sharpe Ratio')
print(SR)
```
Great! Now we can just run this many times over!
```py
num_ports = 15000

all_weights = np.zeros((num_ports, len(stocks.columns)))
ret_arr = np.zeros(num_ports)
vol_arr = np.zeros(num_ports)
sharpe_arr = np.zeros(num_ports)

for ind in range(num_ports):
    # Create random weights
    weights = np.array(np.random.random(4))

    # Rebalance weights
    weights = weights / np.sum(weights)

    # Save weights
    all_weights[ind, :] = weights

    # Expected return
    ret_arr[ind] = np.sum((log_ret.mean() * weights) * 252)

    # Expected volatility
    vol_arr[ind] = np.sqrt(np.dot(weights.T, np.dot(log_ret.cov() * 252, weights)))

    # Sharpe ratio
    sharpe_arr[ind] = ret_arr[ind] / vol_arr[ind]

sharpe_arr.max()
sharpe_arr.argmax()

# 1419 is the argmax index produced by this particular seeded run
all_weights[1419, :]
max_sr_ret = ret_arr[1419]
max_sr_vol = vol_arr[1419]
```
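Rather than hardcoding the index from one run, it is safer to look it up programmatically; a small variant of the last lines above:

```py
best = sharpe_arr.argmax()            # index of the highest Sharpe ratio in this run
best_weights = all_weights[best, :]   # the corresponding allocation
max_sr_ret = ret_arr[best]
max_sr_vol = vol_arr[best]
```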
Plotting the data
```py
plt.figure(figsize=(12, 8))
plt.scatter(vol_arr, ret_arr, c=sharpe_arr, cmap='plasma')
plt.colorbar(label='Sharpe Ratio')
plt.xlabel('Volatility')
plt.ylabel('Return')

# Add red dot for max SR
plt.scatter(max_sr_vol, max_sr_ret, c='red', s=50, edgecolors='black')
```
Mathematical Optimization

There are much better ways to find good allocation weights than just guess and check! We can use optimization functions to find the ideal weights mathematically.

Functionalize Return and SR operations
```py
from scipy.optimize import minimize


def get_ret_vol_sr(weights):
    """Takes in weights, returns an array of [return, volatility, Sharpe ratio]."""
    weights = np.array(weights)
    ret = np.sum(log_ret.mean() * weights) * 252
    vol = np.sqrt(np.dot(weights.T, np.dot(log_ret.cov() * 252, weights)))
    sr = ret / vol
    return np.array([ret, vol, sr])
```
To fully understand all the parameters, check out: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html
```py
help(minimize)
```
The optimizer works as a minimization function. Since we actually want to maximize the Sharpe ratio, we turn it negative, so that minimizing the negative Sharpe is the same as maximizing the positive Sharpe.
```py
def neg_sharpe(weights):
    return get_ret_vol_sr(weights)[2] * -1


# Constraints
def check_sum(weights):
    """Returns 0 if the sum of the weights is 1.0."""
    return np.sum(weights) - 1


# By convention of the minimize function, an equality constraint is a function that returns zero when satisfied
cons = ({'type': 'eq', 'fun': check_sum})

# 0-1 bounds for each weight
bounds = ((0, 1), (0, 1), (0, 1), (0, 1))

# Initial guess (equal distribution)
init_guess = [0.25, 0.25, 0.25, 0.25]

# Sequential Least SQuares Programming (SLSQP)
opt_results = minimize(neg_sharpe, init_guess, method='SLSQP',
                       bounds=bounds, constraints=cons)
opt_results
opt_results.x
get_ret_vol_sr(opt_results.x)
```
All Optimal Portfolios (Efficient Frontier)

The efficient frontier is the set of optimal portfolios that offers the highest expected return for a defined level of risk, or the lowest risk for a given level of expected return. Portfolios that lie below the efficient frontier are sub-optimal, because they do not provide enough return for the level of risk. Portfolios that cluster to the right of the efficient frontier are also sub-optimal, because they have a higher level of risk for the defined rate of return.

Efficient Frontier: http://www.investopedia.com/terms/e/efficientfrontier
```py
# Our returns go from 0 to somewhere around 0.3
# Create a linspace of target returns to calculate the frontier on
frontier_y = np.linspace(0, 0.3, 100)  # Change 100 to a lower number for slower computers!


def minimize_volatility(weights):
    return get_ret_vol_sr(weights)[1]


frontier_volatility = []

for possible_return in frontier_y:
    # constrain the portfolio return to the current target value
    cons = ({'type': 'eq', 'fun': check_sum},
            {'type': 'eq', 'fun': lambda w: get_ret_vol_sr(w)[0] - possible_return})

    result = minimize(minimize_volatility, init_guess, method='SLSQP',
                      bounds=bounds, constraints=cons)

    frontier_volatility.append(result['fun'])

plt.figure(figsize=(12, 8))
plt.scatter(vol_arr, ret_arr, c=sharpe_arr, cmap='plasma')
plt.colorbar(label='Sharpe Ratio')
plt.xlabel('Volatility')
plt.ylabel('Return')

# Add frontier line
plt.plot(frontier_volatility, frontier_y, 'g--', linewidth=3)
```
2. Read in the hanford.csv file
```py
import pandas as pd  # assumed imported earlier in the notebook

df = pd.read_csv("hanford.csv")
df.head()
```
5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure
```py
import statsmodels.formula.api as smf  # assumed imported earlier in the notebook

lm = smf.ols(formula="Mortality~Exposure", data=df).fit()
lm.params
intercept, slope = lm.params
```
6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)
```py
df.plot(kind="scatter", x="Exposure", y="Mortality")
plt.plot(df["Exposure"], slope * df["Exposure"] + intercept, "-", color="red")
lm.rsquared  # the r^2 (coefficient of determination) asked for above
```
7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10
```py
mortality_rate = slope * 10 + intercept
mortality_rate
```
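Equivalently, the fitted model can do the prediction itself; a quick check, assuming the same `lm` object from above:

```py
# statsmodels evaluates the formula on new data passed to predict()
lm.predict(pd.DataFrame({"Exposure": [10]}))
```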
First I'm going to define a function which gets us an attendee's spending at random. Don't worry about the implementation; just check the printed sample of 100 generated values and see if it seems realistic. I think it is, but we can argue about this, sure. You can adjust the algorithm and look at the numbers until the distribution feels realistic to you.
```py
import random  # assumed imported earlier in the notebook


def get_spending_of_attendee():
    if random.random() < 0.03:  # Let's say 3% don't even care about the secret shop
        return 0
    return int((random.paretovariate(2) - 0.5) * 100)


print([get_spending_of_attendee() for _ in range(100)])
```
Packs are identified by an ID of 0 to 5, and this function just hands out an attendee's packs based on their spending.
```py
import math  # assumed imported earlier in the notebook


def get_packs(spending):
    # one random pack (ID 0-5) per $50 spent
    return [random.randint(0, 5) for _ in range(math.floor(spending / 50))]
```
So here's the meat of the notebook. We get a random spending, the packs for that, and try to redeem the stickers for the digital unlock. If we can't do that because of dupes, we increment our userbase by one. If someone is not constrained by dupes, they have no use for our site. This includes:

- People who have fewer than 6 stickers in total
- People who were lucky enough to be able to complete their book on their own
```py
# ATTENDEES and eligible_users are assumed to be defined earlier in the notebook
for _ in range(ATTENDEES):
    spending = get_spending_of_attendee()
    packs = get_packs(spending)
    try:
        # repeatedly remove one full set of the six distinct pack IDs
        while len(packs) > 5:
            for pack_id in range(6):
                packs.remove(pack_id)
    except ValueError:
        # a full set could not be removed: this attendee is blocked by dupes
        eligible_users.append(packs)

print(len(eligible_users))
```
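To make the redemption loop concrete, here is a hypothetical hand-traced case:

```py
packs = [0, 0, 1, 2, 3, 4, 5, 5]  # 8 stickers, two dupes
# len(packs) > 5, so the loop runs once and removes one of each ID 0-5, leaving [0, 5]
# len([0, 5]) <= 5, so the loop exits without a ValueError:
# this attendee completed one book on their own and is NOT counted as an eligible user
```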
So, yeah. We have around 1,280 people who could even use the site. But:

- We can't let all of them know that the site exists.
- Since there's lots of people coming from abroad, most won't even have convenient Internet access.
- Not all of them are going to bother (some of them already have one unlock redeemed).
- If they are a group of friends, they can possibly solve the issue without us.

After all this, we're lucky to have, say, a userbase of 500. However:
```py
pack_count = sum(len(get_packs(get_spending_of_attendee())) for _ in range(ATTENDEES))
print(pack_count)
```
Let's see what dataset we have loaded.
```py
df.education.unique()  # school, college, bachelor, master

# normalize the education labels ('Bechalor' is the dataset's own spelling)
df['education'] = df['education'].replace(['High School or Below'], 'school')
df['education'] = df['education'].replace(['Bechalor'], 'bachelor')
df['education'] = df['education'].replace(['Master or Above'], 'master')

df.to_csv('datasets/dataset.csv')
df[:10]
```
We don't need all the columns, right? Let's drop the unnecessary things.
```py
df = df.drop(['loan_id', 'effective_date', 'due_date', 'paid_off_time', 'past_due_days'], axis=1)
# The dataframe holds the needed columns now. Cool.
df[:10]
```
To get the information about the whole dataframe, use info().
```py
df.info()

# Let's clean the data and create columns if needed.
df['Gender'].unique()
```
We can see that our dataset has two unique string values for Gender. We can't assign numeric values like female = 1 and male = 2 because of feminism. Just kidding. We shouldn't assign them because they would then act as a factor that denotes intensity, and we only want to differentiate categories. So we are going to have separate columns for the two genders. Create a df with two dummy columns named after the genders.
```py
df_sex = pd.get_dummies(df['Gender'])
df_sex[:10]

# add the dummy columns to the main df, then drop the original Gender column
df = pd.concat([df, df_sex], axis=1)
df[:10]
df = df.drop(['Gender'], axis=1)
df[:10]

# Similarly, let's do the same process for both loan_status and education.
# This process is called categorical conversion into numerics, or one-hot encoding
df_loan_status = pd.get_dummies(df['loan_status'])
df_education = pd.get_dummies(df['education'])
df = pd.concat([df, df_loan_status], axis=1)
df = pd.concat([df, df_education], axis=1)
df = df.drop(['loan_status', 'education'], axis=1)
df[:10]
df.info()
```
In machine learning, it's often easier to compute when values lie between 0 and 1. The process of converting them into such values is called normalization. There are many ways to normalize; here I choose min-max scaling (as in sklearn's MinMaxScaler), which maps the highest value to 1 and the smallest value to 0, with the remaining values falling in between.
```py
df_to_norm = df[['Principal', 'terms', 'age']]
df_to_norm[:10]

df_norm = (df_to_norm - df_to_norm.min()) / (df_to_norm.max() - df_to_norm.min())
df_norm[:10]

df = df.drop(['Principal', 'terms', 'age'], axis=1)
df = pd.concat([df, df_norm], axis=1)
df[:10]
```
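The manual formula above is exactly what sklearn's MinMaxScaler computes; an equivalent sketch, assuming sklearn is available:

```py
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()  # maps each column's min to 0 and its max to 1
df_norm_sk = pd.DataFrame(scaler.fit_transform(df_to_norm), columns=df_to_norm.columns)
```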
Customer Churn Analysis

Churn rate, when applied to a customer base, refers to the proportion of contractual customers or subscribers who leave a supplier during a given time period. It is a possible indicator of customer dissatisfaction, cheaper and/or better offers from the competition, more successful sales and/or marketing by the competition, or reasons having to do with the customer life cycle.

Churn is closely related to the concept of average customer life time. For example, an annual churn rate of 25 percent implies an average customer life of four years. An annual churn rate of 33 percent implies an average customer life of three years. The churn rate can be minimized by creating barriers which discourage customers from changing suppliers (contractual binding periods, use of proprietary technology, value-added services, unique business models, etc.), or through retention activities such as loyalty programs.

It is possible to overstate the churn rate, as when a consumer drops the service but then restarts it within the same year. Thus, a clear distinction needs to be made between "gross churn", the total number of absolute disconnections, and "net churn", the overall loss of subscribers or members. The difference between the two measures is the number of new subscribers or members that have joined during the same period. Suppliers may find that if they offer a loss-leader "introductory special", it can lead to a higher churn rate and subscriber abuse, as some subscribers will sign on, let the service lapse, then sign on again to take continuous advantage of current specials.

https://en.wikipedia.org/wiki/Churn_rate
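The churn-to-lifetime relationship quoted above is just the reciprocal; a quick check:

```py
for annual_churn in (0.25, 0.33):
    # average customer lifetime in years is roughly 1 / churn rate
    print(annual_churn, "->", round(1 / annual_churn, 1), "years")
```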
```py
%%capture
from __future__ import print_function

import numpy as np
import pandas as pd
import h2o
from h2o.automl import H2OAutoML
import pandas_profiling

# Suppress unwanted warnings
import warnings
warnings.filterwarnings('ignore')
import logging
logging.getLogger("requests").setLevel(logging.WARNING)

# Load our favorite visualization library
import os
import plotly
import plotly.plotly as py
import plotly.figure_factory as ff
import plotly.graph_objs as go
import cufflinks as cf
plotly.offline.init_notebook_mode(connected=True)

# Sign into Plotly with masked, encrypted API key
myPlotlyKey = os.environ['SECRET_ENV_BRETTS_PLOTLY_KEY']
py.sign_in(username='bretto777', api_key=myPlotlyKey)
```
Load The Dataset
```py
# Load some data
churnDF = pd.read_csv('https://trifactapro.s3.amazonaws.com/churn.csv', delimiter=',')
churnDF.head(5)

#%%capture
#pandas_profiling.ProfileReport(churnDF)
```
Scatterplot Matrix
```py
# separate the calls data for plotting
churnDFs = churnDF.sample(frac=0.07)  # Sample for speedy viz
churnDFs = churnDFs.replace([True, False], ["Churn", "Retain"])
churnDFs = churnDFs[['Account Length', 'Day Calls', 'Eve Calls', 'CustServ Calls', 'Churn']]

# Create scatter plot matrix of call data
splom = ff.create_scatterplotmatrix(churnDFs, diag='histogram', index='Churn',
                                    colormap=dict(Churn='#9CBEF1', Retain='#04367F'),
                                    colormap_type='cat',
                                    height=560, width=650, size=4,
                                    marker=dict(symbol='circle'))
py.iplot(splom)

# Encode the categorical columns numerically
churnDF["Churn"] = churnDF["Churn"].replace([True, False], [1, 0])
churnDF["Int'l Plan"] = churnDF["Int'l Plan"].replace(["no", "yes"], [0, 1])
churnDF["VMail Plan"] = churnDF["VMail Plan"].replace(["no", "yes"], [0, 1])
churnDF.drop(["State", "Area Code", "Phone"], axis=1, inplace=True)

%%capture
#h2o.connect(ip="35.225.239.147")
h2o.init(nthreads=1, max_mem_size="768m")

%%capture
# Split data into training and testing frames
from sklearn.model_selection import train_test_split
training, testing = train_test_split(churnDF, train_size=0.8, stratify=churnDF["Churn"], random_state=9)
x_train = training.drop(["Churn"], axis=1)
y_train = training["Churn"]
x_test = testing.drop(["Churn"], axis=1)
y_test = testing["Churn"]
train = h2o.H2OFrame(python_obj=training)
test = h2o.H2OFrame(python_obj=testing)

# Set predictor and response variables
y = "Churn"
x = train.columns
x.remove(y)

x_train = x_train.values
y_train = y_train.values
x_test = x_test.values
y_test = y_test.values

# Baseline sklearn models
from sklearn.ensemble import GradientBoostingClassifier
clf = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0,
                                 max_depth=1, random_state=0).fit(x_train, y_train)

from sklearn.linear_model import LogisticRegression
logistic = LogisticRegression().fit(x_train, y_train)

# SklearnModel comes from the notebook's model-management library, imported elsewhere
model = SklearnModel(model=logistic,
                     problem_class='binary_classification',
                     description='This is the first churn model',
                     name="Churn classifier",
                     y_test=y_test,
                     x_test=x_test)
model.metrics()
model.save()
```
We can extract the coordinates of neighbour cells and try to fit a model without covariates
```py
%time coords = map(lambda c: (c.centroid.x, c.centroid.y), neighbour_cells)
lon, lat = zip(*coords)
X = pd.DataFrame([lon, lat])
Y = pd.DataFrame(y)

plt.figure(figsize=(17, 11))
plt.scatter(lon, lat, c=Y)

data = pd.concat([Y, X.transpose()], axis=1)
data.columns = ['Presences', 'lon', 'lat']

# Import GPFlow
import GPflow as gf

k = gf.kernels.Matern12(2, lengthscales=1, active_dims=[0, 1])
model = gf.gpr.GPR(X.transpose().as_matrix(),
                   Y.as_matrix().reshape(len(Y), 1).astype(float),
                   k)
```
```py
len(X)
%time model.optimize()
```
```py
model.kern.lengthscales = 320.49
model.kern.variance = 0.011

import numpy as np

Nn = 500
dsc = data
predicted_x = np.linspace(min(dsc.lon), max(dsc.lon), Nn)
predicted_y = np.linspace(min(dsc.lat), max(dsc.lat), Nn)
Xx, Yy = np.meshgrid(predicted_x, predicted_y)

## Fake richness
fake_sp_rich = np.ones(len(Xx.ravel()))
predicted_coordinates = np.vstack([Xx.ravel(), Yy.ravel()]).transpose()
#predicted_coordinates = np.vstack([section.SppN, section.newLon, section.newLat]).transpose()

%time means, variances = model.predict_y(predicted_coordinates)

import cartopy

plt.figure(figsize=(17, 11))
proj = cartopy.crs.PlateCarree()
ax = plt.subplot(111, projection=proj)
ax = plt.axes(projection=proj)
#algo = new_data.plot(column='SppN', ax=ax, cmap=colormap, edgecolors='')
#ax.set_extent([-93, -70, 30, 50])
ax.set_extent([-120, -80, 10, 40])
#ax.set_extent([-95, -70, 25, 45])
#ax.add_feature(cartopy.feature.LAND)
ax.add_feature(cartopy.feature.OCEAN)
ax.add_feature(cartopy.feature.COASTLINE)
ax.add_feature(cartopy.feature.BORDERS, linestyle=':')
ax.add_feature(cartopy.feature.LAKES, alpha=0.9)
ax.stock_img()
#ax.add_geometries(new_data.geometry, crs=cartopy.crs.PlateCarree())
#ax.add_feature(cartopy.feature.RIVERS)

# plot the upper confidence surface (mean + 2 std)
mm = ax.pcolormesh(Xx, Yy, means.reshape(Nn, Nn) + (2 * np.sqrt(variances).reshape(Nn, Nn)), transform=proj)
#cs = plt.contour(Xx, Yy, np.sqrt(variances).reshape(Nn, Nn), linewidths=2, cmap=plt.cm.Greys_r, linestyles='dotted')
cs = plt.contour(Xx, Yy, means.reshape(Nn, Nn) + (2 * np.sqrt(variances).reshape(Nn, Nn)),
                 linewidths=2, colors='k', linestyles='dotted', levels=range(1, 20))
plt.clabel(cs, fontsize=16, inline=True, fmt='%1.1f')
#ax.scatter(new_data.lon, new_data.lat, edgecolors='', color='white', alpha=0.6)
plt.colorbar(mm)
plt.title("Bees")
```
Hello Networkx and Planarity

- NetworkX Homepage
- Planarity's Github Page
- Pre-generated Graph DB - On Google Drive
```py
# generate random graph
G = nx.generators.fast_gnp_random_graph(10, 0.4)

# check planarity and draw the graph
print("The graph is {0} planar".format("" if planarity.is_planar(G) else "not"))
if planarity.is_planar(G):
    planarity.draw(G)

nx.draw(G)
```
Naïve database creator script
```py
# For the sake of this experiment, I used a fixed 32x32 grid for the adjacency matrix.
# This is a huge assumption, but this is just a stupid test!
def create_db():
    # create empty lists
    planar_list = []
    non_planar_list = []

    # generate random graphs and store their adjacency matrices
    for p in range(1, 95):
        for _ in range(50000):
            G = nx.generators.fast_gnp_random_graph(32, p / 100.0)
            if planarity.is_planar(G):
                planar_list.append(nx.to_numpy_array(G))
            else:
                # keep the two classes roughly balanced
                if len(planar_list) > len(non_planar_list):
                    non_planar_list.append(nx.to_numpy_array(G))

    # let's see how many graphs we've got
    print(len(planar_list))
    print(len(non_planar_list))

    # save planar graphs as a numpy nd-array
    planar_db = np.array(planar_list)
    np.save("planar_db.npy", planar_db)

    # save non-planar graphs as a numpy nd-array
    non_planar_db = np.array(non_planar_list)
    np.save("not_planar_db.npy", non_planar_db)
```
Create a new database or load one!
```py
planar_db = np.load("planar_db.npy")
non_planar_db = np.load("not_planar_db.npy")

np.random.shuffle(planar_db)
np.random.shuffle(non_planar_db)

all_data = np.append(planar_db, non_planar_db, axis=0)
all_labels = np.append(np.ones(len(planar_db)), np.zeros(len(non_planar_db))).astype(int)

# shuffling ...
permutation = np.random.permutation(all_data.shape[0])
all_data = all_data[permutation]
all_labels = all_labels[permutation]

training_set = all_data[:250000]
training_label = all_labels[:250000]
test_set = all_data[250000:]
test_label = all_labels[250000:]
```
Create a Simple ConvNet using PyTorch
```py
import torch
import torchvision
import torchvision.transforms as transforms
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 2)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x


net = Net()  # note: the class must be instantiated; the original had 'net = Net'

# Let's go CUDA!
net.cuda()

import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

running_loss = 0.0
batch_size = 5000
train_size = training_set.shape[0]
batch_in_epoch = int(train_size / batch_size) + 1

for epoch in range(500):
    for i in range(batch_in_epoch):
        batch_data = training_set[i * batch_size: (i + 1) * batch_size]
        if batch_data.shape[0] == 0:
            continue
        batch_data = torch.from_numpy(training_set[i * batch_size: (i + 1) * batch_size]).float().unsqueeze_(1)
        batch_label = torch.from_numpy(training_label[i * batch_size: (i + 1) * batch_size]).long()
        inputs, labels = Variable(batch_data.cuda()), Variable(batch_label.cuda())

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.data[0]
        if i % 2000 == 1999:  # print every 2000 mini-batches
            print('[%d, %5d] loss: %.8f' % (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')

test_tensor = torch.from_numpy(test_set).float().unsqueeze_(1)
output = net(Variable(test_tensor.cuda()))
_, predicted = torch.max(output.data, 1)
predicted = predicted.cpu().numpy()

correct = (predicted == test_label).sum()  # was 'batch_test_label', which is undefined
print("prediction accuracy is {0}".format(correct / test_label.size))

wrong_predictions = test_set[predicted != test_label]
```
Let's plot some graphs that our ConvNet failed to classify correctly!
```py
samples = np.random.choice(len(wrong_predictions), 3)

for i in samples:
    G = nx.from_numpy_matrix(wrong_predictions[i])
    isolated_nodes = list(nx.isolates(G))  # nx.isolates yields nodes, not edges
    G.remove_nodes_from(isolated_nodes)

    plt.figure(figsize=(21, 5))

    plt.subplot(1, 4, 1)
    nx.draw(G, pos=nx.circular_layout(G))
    plt.title("Circular Layout")
    plt.draw()

    plt.subplot(1, 4, 2)
    plt.imshow(wrong_predictions[i], cmap='Greys', interpolation='nearest')
    plt.title("Adjacency Matrix")

    plt.subplot(1, 4, 3)
    if planarity.is_planar(G):
        planarity.draw(G)
    else:
        plt.text(0.5, 0.5, 'not planar', ha='center', va='center', fontsize=20, color="b")
    plt.title("Planar Embedding")

    plt.subplot(1, 4, 4)
    nx.draw(G, pos=nx.spring_layout(G))
    plt.draw()
    plt.title("Spring Layout")

    plt.show()
```
Call the brownian function to simulate a Wiener process with 1000 steps and max time of 1.0. Save the results as two arrays t and W.
```py
# brownian() is defined earlier in this assignment
t, W = np.array(brownian(1, 1000))

assert isinstance(t, np.ndarray)
assert isinstance(W, np.ndarray)
assert t.dtype == np.dtype(float)
assert W.dtype == np.dtype(float)
assert len(t) == len(W) == 1000
```
Use np.diff to compute the changes at each step of the motion, dW, and then compute the mean and standard deviation of those differences.
```py
# Standard numpy functions
dW = np.array(np.diff(W))
dW.mean()
dW.var(), dW.std()

assert len(dW) == len(W) - 1
assert dW.dtype == np.dtype(float)
```
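For a Wiener process the increments should have mean near 0 and standard deviation near sqrt(dt); a quick sanity check, assuming the evenly spaced `t` and the `dW` computed above:

```py
dt = t[1] - t[0]                   # step size of the evenly spaced time grid
print(dW.mean(), "~ 0")            # increments are zero-mean
print(dW.std(), "~", np.sqrt(dt))  # std of increments scales as sqrt(dt)
```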
Write a function that takes $W(t)$ and converts it to geometric Brownian motion using the equation: $$ X(t) = X_0 e^{((\mu - \sigma^2/2)t + \sigma W(t))} $$ Use Numpy ufuncs and no loops in your function.
```py
def geo_brownian(t, W, X0, mu, sigma):
    """Return X(t) for geometric brownian motion with drift mu, volatility sigma."""
    # Plug into the equation above; note that (mu - sigma**2/2) multiplies t
    X = X0 * np.exp((mu - sigma**2 / 2) * t + sigma * W)
    return X

assert True  # leave this for grading
```
Use your function to simulate geometric brownian motion, $X(t)$ for $X_0=1.0$, $\mu=0.5$ and $\sigma=0.3$ with the Wiener process you computed above. Visualize the process using plt.plot with t on the x-axis and X(t) on the y-axis. Label your x and y axes.
```py
# Standard plot: t on the x-axis, X(t) on the y-axis, as the instructions ask
plt.plot(t, geo_brownian(t, W, 1.0, 0.5, 0.3))
plt.xlabel('t')
plt.ylabel('X(t)')

assert True  # leave this for grading
```
Read and tokenize the movie review dataset
```py
datafolder = '../data/scaledata/Dennis+Schwartz/'
rating_file = os.path.join(datafolder, 'rating.Dennis+Schwartz')
review_file = os.path.join(datafolder, 'subj.Dennis+Schwartz')

with open(rating_file, 'r') as f:
    ratings = np.array([float(line.strip()) for line in f.readlines()])
with open(review_file, 'r') as f:
    reviews = [line for line in f.readlines()]

voca, word_ids, word_cnt = get_ids_cnt(reviews)
corpus = convert_cnt_to_list(word_ids, word_cnt)

n_doc = len(corpus)
n_voca = voca.size
print('num doc', n_doc, 'num_voca', n_voca)

plt.hist(ratings, bins=9)
plt.show()
print('max rating', np.max(ratings), '\tmin rating', np.min(ratings))
```
Infer topics with SupervisedLDA
```py
n_topic = 50
r_var = 0.01
model = GibbsSupervisedLDA(n_doc, n_voca, n_topic, sigma=r_var)
model.fit(corpus, ratings)

# print topics sorted by their regression coefficient eta
for ti in model.eta.argsort():
    top_words = get_top_words(model.TW, voca, ti, n_words=10)
    print('Eta', model.eta[ti], 'Topic', ti, ':\t', ','.join(top_words))
```
Are we underfitting?

So far, our validation accuracy has generally been higher than our training accuracy. This leads to 2 questions:

1. How is this possible?
2. Is this desirable?

Answer 1): Because of dropout. Dropout refers to a layer that randomly deletes (i.e. sets to 0) each activation in the previous layer with probability p (usually 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can have higher accuracy than the training set. A small sketch of this train/test asymmetry follows after this section.

The purpose of dropout is to avoid overfitting. Why? By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set. The creation of dropout was one of the key developments in deep learning, which allows us to create rich models without overfitting. However, it can also result in underfitting if overused.

Answer 2): Not desirable. It is likely that we can get better validation set results with less dropout.

Removing dropout

We start with our fine-tuned cats vs dogs model (with dropout), then fine-tune again all the dense layers, after removing dropout from them.

Action Plan:
* Re-create and load our modified VGG model with binary dependent
* Split the model between the convolutional (conv) layers and the dense layers
* Pre-calculate the output of the conv layers, so that we don't have to redundantly re-calculate them on every epoch
* Create a new model with just the dense layers and dropout p set to 0
* Train this new model using the output of the conv layers as training data
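As an illustration of the train/test asymmetry described above, here is a minimal numpy sketch of (inverted) dropout; this is a generic sketch, not the Keras implementation:

```py
import numpy as np

def dropout(activations, p=0.5, training=True):
    """Randomly zero activations with probability p during training."""
    if not training:
        return activations  # at validation/test time, dropout is a no-op
    mask = np.random.rand(*activations.shape) >= p
    # scale by 1/(1-p) so the expected activation matches test time
    return activations * mask / (1.0 - p)

a = np.ones((2, 4))
print(dropout(a, p=0.5, training=True))   # roughly half the units zeroed
print(dropout(a, p=0.5, training=False))  # unchanged
```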
```py
??vgg_ft
```
```py
def vgg_ft(out_dim):
    vgg = Vgg16()
    vgg.ft(out_dim)  # the original read 'out_fim', a typo
    model = vgg.model
    return model
```
```py
??Vgg16.ft
```
```py
def ft(self, num):
    """Replace the last layer of the model with a Dense layer of num neurons.
    Will also lock the weights of all layers except the new layer so that we
    only learn weights for the last layer in subsequent training.

    Args:
        num (int): Number of neurons in the Dense layer
    Returns:
        None
    """
    model = self.model
    model.pop()
    for layer in model.layers:
        layer.trainable = False
    model.add(Dense(num, activation='softmax'))
    self.compile()
```
```py
model = vgg_ft(2)
```
...and load our fine-tuned weights from lesson 2.
```py
model.load_weights(model_path + 'finetune3.h5')
```
Now, let's train a few iterations without dropout. But first, let's pre-calculate the input to the fully connected layers, i.e. the output up to the Flatten() layer, because the convolution layers take a lot of time to compute while the Dense layers do not.
```py
model.summary()

layers = model.layers

# find the last convolution layer
last_conv_idx = [index for index, layer in enumerate(layers)
                 if type(layer) is Convolution2D][-1]
last_conv_idx
layers[last_conv_idx]

conv_layers = layers[:last_conv_idx + 1]
conv_model = Sequential(conv_layers)
fc_layers = layers[last_conv_idx + 1:]
```
Now, we can use the exact same approach to create features as we used when we created the linear model from the imagenet predictions in lesson 2.
```py
batches = get_batches(path + 'train', shuffle=False, batch_size=batch_size)
val_batches = get_batches(path + 'valid', shuffle=False, batch_size=batch_size)

val_classes = val_batches.classes
trn_classes = batches.classes
val_labels = onehot(val_classes)
trn_labels = onehot(trn_classes)

# Let's get the outputs of the conv model and save them
val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample)
trn_features = conv_model.predict_generator(batches, batches.nb_sample)

save_array(model_path + 'train_convlayer_features.bc', trn_features)
save_array(model_path + 'valid_convlayer_features.bc', val_features)

trn_features = load_array(model_path + 'train_convlayer_features.bc')
val_features = load_array(model_path + 'valid_convlayer_features.bc')
trn_features.shape  # Note that the last conv layer outputs 512 x 14 x 14
```
For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model.
```py
# the copied weights were trained with dropout p=0.5; since we remove dropout,
# halve them so the expected activations stay the same
def proc_wgts(layer):
    return [o / 2 for o in layer.get_weights()]


# Such a finely tuned model needs to be updated very slowly!
opt = RMSprop(lr=0.00001, rho=0.7)


def get_fc_model():
    model = Sequential([
        MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
        Flatten(),
        Dense(4096, activation='relu'),
        Dropout(0.),
        Dense(4096, activation='relu'),
        Dropout(0.),
        Dense(2, activation='softmax'),
    ])

    for l1, l2 in zip(model.layers, fc_layers):
        l1.set_weights(proc_wgts(l2))

    model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
    return model


fc_model = get_fc_model()
```
Reducing overfitting

Now we've gotten a model that overfits. So let's take a few steps to reduce this.

Approaches to reduce overfitting

Before relying on dropout or other regularization approaches to reduce overfitting, try the following techniques first, because regularization, by definition, biases our model towards simplicity, which we only want to do if we know that's necessary.

Action Plan:
1. Add more data (not applicable in a Kaggle competition)
2. Use data augmentation
3. Use architectures that generalize well
4. Add regularization
5. Reduce architecture complexity

We assume that you've already collected as much data as you can, so step (1) isn't relevant.

Data augmentation

Step 2 - Data augmentation refers to creating additional synthetic data based on reasonable modifications of your input data. For images, this is likely to involve flipping, rotation, zooming, cropping, panning, minor color changes...

Which types of augmentation are appropriate depends on your data. For instance, for regular photos you want to use horizontal flipping, but not vertical flipping. We recommend always using at least some light data augmentation, unless you have so much data that your model will never see the same input twice.

Keras comes with very convenient features for automating data augmentation. You simply define what types and maximum amounts of augmentation you want.
```py
# dim_ordering='tf' uses tensorflow dimension ordering,
# which is the same order as matplotlib uses for display.
# Therefore when just using for display purposes, this is more convenient
gen = image.ImageDataGenerator(rotation_range=10,
                               width_shift_range=0.1,
                               height_shift_range=0.1,
                               shear_range=0.15,
                               zoom_range=0.1,
                               channel_shift_range=10.,
                               horizontal_flip=True,
                               dim_ordering='tf')
```
So, to decide which augmentation methods to use, let's take a look at the generated images and use our intuition.
```py
# Create a 'batch' of a single image
img = np.expand_dims(ndimage.imread('data/dogscats/test/unknown/7.jpg'), 0)

# Request the generator to create batches from this image
aug_iter = gen.flow(img)

# Get 8 examples of these augmented images
aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)]

# The original
plt.imshow(img[0])

# Augmented
plots(aug_imgs, (20, 7), 2)

# Ensure that we return to theano dimension ordering
K.set_image_dim_ordering('th')
```
Now it's time to add a small amount of data augmentation and see if we can reduce overfitting. The approach will be identical to the method we used to fine-tune the dense layers in lesson 2, except that we will use a generator with augmentation configured. Here's how we set up the generator and create batches from it:
```py
gen = image.ImageDataGenerator(rotation_range=15,
                               width_shift_range=0.1,
                               height_shift_range=0.1,
                               zoom_range=0.1,
                               horizontal_flip=True)
batches = get_batches(path + 'train', gen, batch_size=batch_size)
val_batches = get_batches(path + 'valid', shuffle=False, batch_size=batch_size)

??get_batches
```
```py
get_batches(dirname, gen=<keras.preprocessing.image.ImageDataGenerator object at 0x7fb1a30544e0>,
            shuffle=True, batch_size=4, class_mode='categorical', target_size=(224, 224))
```

When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image (so the result of the conv layers will be different every time). Therefore, in order to allow data to flow through all the conv layers and our new dense layers, we attach our fully connected model to the conv model, after ensuring that the conv layers are not trainable.
```py
fc_model = get_fc_model()

for layer in conv_model.layers:
    layer.trainable = False

conv_model.add(fc_model)
```
Now we can compile, train, and save our model as usual. Note that we use fit_generator() since we want to pull random images from the directories on every batch.
```py
conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=3,
                         validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
conv_model.save_weights(model_path + 'aug1.h5')
conv_model.load_weights(model_path + 'aug1.h5')
```
Batch normalization

Batch normalization is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so that they are of similar scales is called normalization. It is very helpful for fast training: if some activations are very high, they will saturate the model and create very large gradients, causing training to fail; if very low, they will cause training to proceed very slowly.

Prior to the development of batchnorm in 2015, only the inputs to a model could be effectively normalized. However, weights in intermediate layers could easily become poorly scaled, due to problems in weight initialization, or a high learning rate combined with random fluctuations in weights. Batchnorm resolves this problem by normalizing each intermediate layer as well. The details of how it works are not terribly important; the important takeaway is: all modern networks should use batchnorm, or something equivalent. There are 2 reasons for this:

1. Adding batchnorm to a model can result in 10x or more improvements in training speed
2. Normalization greatly reduces the ability of a small number of outlying inputs to over-influence the training. It also tends to reduce overfitting

How it works: as a first step, it normalizes intermediate layers (activations) in the same way as input layers can be normalized. But this alone would not be enough, since the model would then just push the weights up or down indefinitely to try to undo this normalization. So there are 2 additional steps:

1. Add 2 more trainable parameters to each layer - one to multiply all activations to set an arbitrary standard deviation, and one to add to all activations to set an arbitrary mean
2. Incorporate both the normalization and the learnt multiply/add parameters into the gradient calculations during backprop. This ensures that the weights don't tend to push very high or very low (since the normalization is included in the gradient calculations, the updates are aware of the normalization). But it also ensures that if a layer does need to change the overall mean or standard deviation in order to match the output scale, it can do so.

Adding batchnorm to the model

Why doesn't VGG use this? Because when VGG was created, batchnorm had not been invented.
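A minimal numpy sketch of the forward pass just described (normalize, then apply the learnt scale gamma and shift beta); this is illustrative only, not the Keras implementation:

```py
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # step 1: normalize each feature over the batch dimension
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    # step 2: learnt per-feature scale and shift can undo the normalization if needed
    return gamma * x_hat + beta

x = np.random.randn(8, 4) * 50 + 10           # badly scaled activations
out = batchnorm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
print(out.mean(axis=0), out.std(axis=0))      # ~0 mean, ~1 std per feature
```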
```py
conv_layers[-1].output_shape[1:]


def get_bn_layers(p):
    return [
        MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
        Flatten(),
        Dense(4096, activation='relu'),
        BatchNormalization(),
        Dropout(p),
        Dense(4096, activation='relu'),
        BatchNormalization(),
        Dropout(p),
        Dense(1000, activation='softmax'),
    ]


def load_fc_weights_from_vgg16bn(model):
    """Load weights for model from the dense layers of the VGG16BN model."""
    # See imagenet_batchnorm.ipynb for info on how the weights for
    # Vgg16BN can be generated from the standard Vgg16 weights.
    from vgg16bn import Vgg16BN
    vgg16_bn = Vgg16BN()
    _, fc_layers = split_at(vgg16_bn.model, Convolution2D)
    copy_weights(fc_layers, model.layers)


p = 0.6
bn_model = Sequential(get_bn_layers(0.6))
load_fc_weights_from_vgg16bn(bn_model)


# rescale weights when changing dropout from prev_p to new_p
def proc_wgts(layer, prev_p, new_p):
    scal = (1 - prev_p) / (1 - new_p)
    return [o * scal for o in layer.get_weights()]


for l in bn_model.layers:
    if type(l) == Dense:
        l.set_weights(proc_wgts(l, 0.5, 0.6))

# replace the 1000-way ImageNet classifier with our 2-way output
bn_model.pop()
for layer in bn_model.layers:
    layer.trainable = False
bn_model.add(Dense(2, activation='softmax'))

bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy'])
bn_model.fit(trn_features, trn_labels, nb_epoch=8,
             validation_data=(val_features, val_labels))
bn_model.save_weights(model_path + 'bn.h5')
bn_model.load_weights(model_path + 'bn.h5')

# build the final model: frozen conv layers + the batchnorm dense layers
bn_layers = get_bn_layers(0.6)
bn_layers.pop()
bn_layers.append(Dense(2, activation='softmax'))

final_model = Sequential(conv_layers)
for layer in final_model.layers:
    layer.trainable = False
for layer in bn_layers:
    final_model.add(layer)

# copy the trained dense weights into the combined model
for l1, l2 in zip(bn_model.layers, bn_layers):
    l2.set_weights(l1.get_weights())

final_model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1,
                          validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
final_model.save_weights(model_path + 'final1.h5')
final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4,
                          validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
final_model.save_weights(model_path + 'final2.h5')
# you can try to set final_model.optimizer.lr = 0.001 and train another round
```
Compute source power using DICS beamformer

Compute a Dynamic Imaging of Coherent Sources (DICS) (Gross et al., 2001) filter from single-trial activity to estimate source power across a frequency band. This example demonstrates how to source-localize the event-related synchronization (ERS) of beta band activity in the somato dataset.
```py
# Author: Marijn van Vliet <w.m.vanvliet@gmail.com>
#         Roman Goj <roman.goj@gmail.com>
#         Denis Engemann <denis.engemann@gmail.com>
#         Stefan Appelhoff <stefan.appelhoff@mailbox.org>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np

import mne
from mne.datasets import somato
from mne.time_frequency import csd_morlet
from mne.beamformer import make_dics, apply_dics_csd

print(__doc__)
```
We can now listen to the resulting audio with the beats marked by beeps. We can also visualize beat estimations.
```py
import IPython
IPython.display.Audio(temp_dir.name + '/dubstep_beats.flac')

from pylab import plot, show, figure, imshow
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (15, 6)

plot(audio)
for beat in beats:
    plt.axvline(x=beat * 44100, color='red')  # beat positions are in seconds; the audio is at 44100 Hz

plt.xlabel('Time (samples)')
plt.title("Audio waveform and the estimated beat positions")
show()
```
BPM histogram

The BPM value output by RhythmExtractor2013 is the average of all BPM estimates done for each interval between two consecutive beats. Alternatively, we can analyze the distribution of all those intervals using BpmHistogramDescriptors. This is especially useful for music with a varying tempo (which is not the case in our example).
```py
peak1_bpm, peak1_weight, peak1_spread, peak2_bpm, peak2_weight, peak2_spread, histogram = \
    es.BpmHistogramDescriptors()(beats_intervals)

print("Overall BPM (estimated before): %0.1f" % bpm)
print("First histogram peak: %0.1f bpm" % peak1_bpm)
print("Second histogram peak: %0.1f bpm" % peak2_bpm)

fig, ax = plt.subplots()
ax.bar(range(len(histogram)), histogram, width=1)
ax.set_xlabel('BPM')
ax.set_ylabel('Frequency of occurrence')
plt.title("BPM histogram")
ax.set_xticks([20 * x + 0.5 for x in range(int(len(histogram) / 20))])
ax.set_xticklabels([str(20 * x) for x in range(int(len(histogram) / 20))])
plt.show()
```
BPM estimation with PercivalBpmEstimator PercivalBpmEstimator is another algorithm for tempo estimation.
```py
# Load an audio file.
audio = es.MonoLoader(filename='../../../test/audio/recorded/dubstep.flac')()

# Compute BPM.
bpm = es.PercivalBpmEstimator()(audio)
print("BPM:", bpm)
```
BPM estimation for audio loops The BPM detection algorithms we considered so far won't necessarily produce the best estimation on short audio inputs, such as audio loops used in music production. Still, it is possible to apply some post-processing heuristics under the assumption that the analyzed audio loop is expected to be well-cut. We have developed the LoopBpmEstimator algorithm specifically for the case of short audio loops. Based on PercivalBpmEstimator, it computes the likelihood of the correctness of BPM predictions using the duration of the audio loop as a reference.
```py
# Our input audio is indeed a well-cut loop. Let's compute the BPM.
bpm = es.LoopBpmEstimator()(audio)
print("Loop BPM:", bpm)
```
BPM estimation with TempoCNN Essentia supports inference with TensorFlow models for a variety of MIR tasks, in particular tempo estimation, for which we provide the TempoCNN models. The TempoCNN algorithm outputs a global BPM estimation on the entire audio input as well as local estimations on short audio segments (patches) throughout the track. For local estimations, it provides their probabilities that can be used as a confidence measure. To use this algorithm in Python, follow our instructions for using TensorFlow models. To download the model:
```py
!curl -SLO https://essentia.upf.edu/models/tempo/tempocnn/deeptemp-k16-3.pb

import essentia.standard as es

sr = 11025
audio_11khz = es.MonoLoader(filename='../../../test/audio/recorded/techno_loop.wav', sampleRate=sr)()

global_bpm, local_bpm, local_probs = es.TempoCNN(graphFilename='deeptemp-k16-3.pb')(audio_11khz)
print('song BPM: {}'.format(global_bpm))
```
We can plot a slice of the waveform on top of a grid with the estimated tempo to get visual verification.
```py
import numpy as np

duration = 5  # seconds
audio_slice = audio_11khz[:sr * duration]

plt.plot(audio_slice)
markers = np.arange(0, len(audio_slice), sr / (global_bpm / 60))  # one marker per beat period
for marker in markers:
    plt.axvline(x=marker, color='red')

plt.title("Audio waveform on top of a tempo grid")
show()
```
TempoCNN operates on audio slices of 12 seconds with an overlap of 6 seconds by default. Additionally, the algorithm outputs the local estimations along with their probabilities. The global value is computed by majority voting by default. However, this method is only recommended when a constant tempo can be assumed.
```py
print('local BPM: {}'.format(local_bpm))
print('local probabilities: {}'.format(local_probs))
```
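A sketch of the majority-voting idea mentioned above, applied to the local estimates (this mimics, rather than calls, the algorithm's internal logic):

```py
from collections import Counter

# round local estimates to whole BPM values and take the most common one
votes = Counter(int(round(float(b))) for b in local_bpm)
print('majority-vote BPM:', votes.most_common(1)[0][0])
```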
line style or marker character | description
-----------|-------------------------
'-' | solid line style
'--' | dashed line style
'-.' | dash-dot line style
':' | dotted line style
'.' | point marker
',' | pixel marker
'o' | circle marker
'v' | triangle_down marker
'^' | triangle_up marker
'<' | triangle_left marker
'>' | triangle_right marker
'1' | tri_down marker
'2' | tri_up marker
'3' | tri_left marker
'4' | tri_right marker
's' | square marker
'p' | pentagon marker
'*' | star marker
'h' | hexagon1 marker
'H' | hexagon2 marker
'+' | plus marker
'x' | x marker
'D' | diamond marker
'd' | thin_diamond marker
'&#124;' | vline marker (pipe)
'_' | hline marker
```py
# Create a figure of size 8x6 inches, 80 dots per inch
plt.figure(figsize=(8, 6), dpi=80)

# Create a new subplot from a grid of 1x1
plt.subplot(1, 1, 1)

X = np.linspace(-np.pi, np.pi, 256, endpoint=True)
C, S = np.cos(X), np.sin(X)

# Plot cosine with a blue dashed line of width 1 (pixel)
plt.plot(X, C, color="blue", linewidth=1.0, linestyle="--")

# Plot sine with a green line of width 1 (pixel) and pentagon markers
plt.plot(X, S, color="green", linewidth=1.0, marker="p")

# Set x limits
plt.xlim(-4.0, 4.0)

# Set x ticks
plt.xticks(np.linspace(-4, 4, 9, endpoint=True))

# Set y limits
plt.ylim(-1.0, 1.0)

# Set y ticks
plt.yticks(np.linspace(-1, 1, 5, endpoint=True))

# Save figure using 72 dots per inch
# plt.savefig("exercice_2.png", dpi=72)

# Show result on screen
plt.show()
```
Changing colors and line widths

Color codes

character | color
----------|----------
'b' | blue
'g' | green
'r' | red
'c' | cyan
'm' | magenta
'y' | yellow
'k' | black
'w' | white
```py
# Create a figure of size 6x4 inches, 20 dots per inch
plt.figure(figsize=(6, 4), dpi=20)

# Plot cosine with a red continuous line of width 1 (pixel)
plt.plot(X, C, color="red", linewidth=1.0, linestyle="-")

# Plot sine with a green continuous line of width 1 (pixel)
plt.plot(X, S, color="green", linewidth=1.0, linestyle="-")
```
Setting limits
```py
plt.xlim(X.min() * 1.1, X.max() * 1.1)
plt.ylim(C.min() * 1.1, C.max() * 1.1)

# Plot cosine with a red continuous line of width 1 (pixel)
plt.plot(X, C, color="red", linewidth=1.0, linestyle="-")

# Plot sine with a green continuous line of width 1 (pixel)
plt.plot(X, S, color="green", linewidth=1.0, linestyle="-")
```
Setting ticks
```py
plt.xticks([-np.pi, -np.pi / 2, 0, np.pi / 2, np.pi])
plt.yticks([-1, 0, +1])

# Plot cosine with a red continuous line of width 1 (pixel)
plt.plot(X, C, color="red", linewidth=1.0, linestyle="-")

# Plot sine with a blue continuous line of width 1 (pixel)
plt.plot(X, S, color="blue", linewidth=1.0, linestyle="-")
```
Setting tick labels

xticks and yticks accept two lists, as below: [tick values], [tick labels]
```py
plt.xticks([-np.pi, -np.pi / 2, 0, np.pi / 2, np.pi],
           [r'$-\pi$', r'$-\pi/2$', r'$0$', r'$+\pi/2$', r'$+\pi$'])
plt.yticks([-1, 0, +1],
           [r'$-1$', r'$0$', r'$+1$'])

# Plot cosine with a red continuous line of width 1 (pixel)
plt.plot(X, C, color="red", linewidth=1.0, linestyle="-")

# Plot sine with a blue continuous line of width 1 (pixel)
plt.plot(X, S, color="blue", linewidth=1.0, linestyle="-")
```
Moving spines
```py
ax = plt.gca()  # gca stands for 'get current axis'
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data', 0))
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))

# Plot cosine with a blue continuous line of width 1 (pixel)
plt.plot(X, C, color="blue", linewidth=1.0, linestyle="-")

# Plot sine with a green continuous line of width 1 (pixel)
plt.plot(X, S, color="green", linewidth=1.0, linestyle="-")
```
Adding a legend
```py
plt.plot(X, C, color="blue", linewidth=2.5, linestyle="-", label="cosine")
plt.plot(X, S, color="red", linewidth=2.5, linestyle="-", label="sine")
plt.legend(loc='upper left')
```
Annotate some points
```py
t = 2 * np.pi / 3

plt.plot([t, t], [0, np.cos(t)], color='blue', linewidth=2.5, linestyle="--")
plt.scatter([t, ], [np.cos(t), ], 50, color='blue')
plt.annotate(r'$sin(\frac{2\pi}{3})=\frac{\sqrt{3}}{2}$',
             xy=(t, np.sin(t)), xycoords='data',
             xytext=(+10, +30), textcoords='offset points', fontsize=16,
             arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))

plt.plot([t, t], [0, np.sin(t)], color='red', linewidth=2.5, linestyle="--")
plt.scatter([t, ], [np.sin(t), ], 50, color='red')
plt.annotate(r'$cos(\frac{2\pi}{3})=-\frac{1}{2}$',
             xy=(t, np.cos(t)), xycoords='data',
             xytext=(-90, -50), textcoords='offset points', fontsize=16,
             arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))

plt.plot(X, C, color="blue", linewidth=2.5, linestyle="-", label="cosine")
plt.plot(X, S, color="red", linewidth=2.5, linestyle="-", label="sine")
```
Devil is in the details
```py
for label in ax.get_xticklabels() + ax.get_yticklabels():
    label.set_fontsize(16)
    label.set_bbox(dict(facecolor='white', edgecolor='None', alpha=0.65))

t = 2 * np.pi / 3

plt.plot([t, t], [0, np.cos(t)], color='blue', linewidth=2.5, linestyle="--")
plt.scatter([t, ], [np.cos(t), ], 50, color='blue')
plt.annotate(r'$sin(\frac{2\pi}{3})=\frac{\sqrt{3}}{2}$',
             xy=(t, np.sin(t)), xycoords='data',
             xytext=(+10, +30), textcoords='offset points', fontsize=16,
             arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))

plt.plot([t, t], [0, np.sin(t)], color='red', linewidth=2.5, linestyle="--")
plt.scatter([t, ], [np.sin(t), ], 50, color='red')
plt.annotate(r'$cos(\frac{2\pi}{3})=-\frac{1}{2}$',
             xy=(t, np.cos(t)), xycoords='data',
             xytext=(-90, -50), textcoords='offset points', fontsize=16,
             arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))

plt.plot(X, C, color="blue", linewidth=2.5, linestyle="-", label="cosine")
plt.plot(X, S, color="red", linewidth=2.5, linestyle="-", label="sine")
plt.legend(loc='upper left')
```
Frequency and time-frequency sensor analysis

The objective is to show you how to explore the spectral content of your data (frequency and time-frequency). Here we'll work on Epochs. We will use the somatosensory dataset that contains so-called event-related synchronizations (ERS) / desynchronizations (ERD) in the beta band.
```py
import numpy as np
import matplotlib.pyplot as plt

import mne
from mne.time_frequency import tfr_morlet, psd_multitaper
from mne.datasets import somato
```
Inspect power

Note: the generated figures are interactive. In the topo you can click on an image to visualize the data for one sensor. You can also select a portion of the time-frequency plane to obtain a topomap for a certain time-frequency region.
```py
power.plot_topo(baseline=(-0.5, 0), mode='logratio', title='Average power')
power.plot([82], baseline=(-0.5, 0), mode='logratio')

fig, axis = plt.subplots(1, 2, figsize=(7, 4))
power.plot_topomap(ch_type='grad', tmin=0.5, tmax=1.5, fmin=8, fmax=12,
                   baseline=(-0.5, 0), mode='logratio', axes=axis[0],
                   title='Alpha', vmax=0.45, show=False)
power.plot_topomap(ch_type='grad', tmin=0.5, tmax=1.5, fmin=13, fmax=25,
                   baseline=(-0.5, 0), mode='logratio', axes=axis[1],
                   title='Beta', vmax=0.45, show=False)
mne.viz.tight_layout()
plt.show()
```
The setup required to repeat these operations is explained in the introduction notebook. In case graphs do not appear in this page, you can refer to the static version. The next commands may or may not be needed depending on your setup (e.g. if you use my docker setup):
```py
import os
from pprint import pprint

import pandas
import sqlalchemy

# your postgres server IP
IP = 'localhost'


def sql(query, **kwargs):
    """Helper function for SQL queries using the %(...)s syntax.

    Parameters defined globally are replaced implicitly.
    """
    params = globals().copy()
    params.update(kwargs)

    # define DB connection parameters if needed
    PGHOST = os.environ.get('PGHOST', IP)
    PGDATABASE = os.environ.get('PGDATABASE', 'musicbrainz')
    PGUSER = os.environ.get('PGUSER', 'musicbrainz')
    PGPASSWORD = os.environ.get('PGPASSWORD', 'musicbrainz')

    engine = sqlalchemy.create_engine(
        'postgresql+psycopg2://%(PGUSER)s:%(PGPASSWORD)s@%(PGHOST)s/%(PGDATABASE)s' % locals(),
        isolation_level='READ UNCOMMITTED')
    return pandas.read_sql(query, engine, params=params)


# helper functions to generate an HTML link to an entity's MusicBrainz URL
def _mb_link(type, mbid):
    return '<a href="https://musicbrainz.org/%(type)s/%(mbid)s">%(mbid)s</a>' % locals()


mb_artist_link = lambda mbid: _mb_link('artist', mbid)
```
1-timelines.ipynb
loujine/musicbrainz-dataviz
mit
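A minimal sanity check for the helpers above (not part of the original notebook): the limit value and the query are purely illustrative, and the snippet assumes the MusicBrainz database is reachable with the connection parameters defined earlier.

# Illustrative usage of the sql() helper and the %(...) parameter substitution
limit = 5
sample = sql("SELECT name, gid FROM artist LIMIT %(limit)s", limit=limit)
print(sample)
print(mb_artist_link(sample['gid'][0]))  # HTML link for the first artist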
Extraction of band data from the database Now we can extract the information for the band I want. The SQL query will look for:

- the band name
- the artists linked to this band through the "member of" relationship
- the instrument/vocal role of this relationship

Let's start with a band you probably already know:
band_name = 'The Beatles'
1-timelines.ipynb
loujine/musicbrainz-dataviz
mit
The SQL query is a bit complicated because it joins a lot of different tables, so I won't go into the details. We store the result in a pandas DataFrame (df).
df = sql(""" SELECT b.name AS band, m.name AS member, m.gid AS mbid, lat.name AS role, to_date(to_char(l.begin_date_year, '9999') || '0101', 'YYYYMMDD') AS start, to_date(to_char(l.end_date_year, '9999') || '0101', 'YYYYMMDD') AS end FROM artist AS b JOIN l_artist_artist AS laa ON laa.entity1 = b.id JOIN artist AS m ON laa.entity0 = m.id JOIN link AS l ON l.id = laa.link JOIN link_attribute AS la ON la.link = l.id JOIN link_attribute_type AS lat ON la.attribute_type = lat.id JOIN link_type AS lt ON l.link_type = lt.id WHERE lt.name = 'member of band' AND b.name = %(band_name)s AND lat.name != 'original'; """) df
1-timelines.ipynb
loujine/musicbrainz-dataviz
mit
The data is here; we just need to set a start date for Lennon's roles since it is missing from the database.
import datetime

df['start'] = df['start'].fillna(datetime.date(1957, 1, 1))
df['mbid'] = df['mbid'].astype(str)  # otherwise pandas uses the UUID data type, which will cause problems later
df
1-timelines.ipynb
loujine/musicbrainz-dataviz
mit
Display a timeline with timesheet-advanced The timesheet-advanced package requires the input data for the timeline to be inserted slightly differently from what we have in our dataframe df. Let us first copy our data in a new variable ts and simplify the dates to years.
ts = df.copy() ts['start'] = ts['start'].apply(lambda date: date.year).astype(str) ts['end'] = ts['end'].apply(lambda date: date.year).astype(str)
1-timelines.ipynb
loujine/musicbrainz-dataviz
mit
We need a 'label' field (we'll choose the band member name + instrument) and we need a 'type' which is a color. We choose colors to represent all possible roles (vocals, guitar, drums....)
ts['label'] = df['member'] + ' (' + df['role'] + ')' ts colors = dict(zip(sorted(set(ts['role'])), ['red', 'blue', 'yellow', 'green'])) print('Correspondance between colors and roles: {}'.format(colors)) ts['type'] = ts['role'].apply(lambda role: colors[role]) ts
1-timelines.ipynb
loujine/musicbrainz-dataviz
mit
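A hedged variant of the colour mapping above: with only four hard-coded colours, a band having more distinct roles would silently drop some roles and then raise a KeyError, so cycling over a longer palette is one way to generalize (the palette itself is an arbitrary choice, not something from the original notebook).

# Sketch: map an arbitrary number of roles to colours by cycling over a palette
from itertools import cycle

palette = cycle(['red', 'blue', 'yellow', 'green', 'orange', 'purple'])
colors_generalized = {role: colour
                      for role, colour in zip(sorted(set(ts['role'])), palette)}
print(colors_generalized)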
We can also add a 'link' column containing URLs to the MusicBrainz website:
ts['link'] = 'https://musicbrainz.org/artist/' + ts['mbid'] ts.drop('mbid', axis=1, inplace=True) ts
1-timelines.ipynb
loujine/musicbrainz-dataviz
mit
The last preparation step is to transform this Python data structure into a JavaScript one that the timesheet library can read. We're going to use the fact that a Python list and a JavaScript array are very close (we could also use the JSON format to transform our data into something JavaScript-compatible).
bubbles = [ts.loc[i].to_dict() for i in range(len(ts))] print('First bubble:') pprint(bubbles[0])
1-timelines.ipynb
loujine/musicbrainz-dataviz
mit
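As mentioned above, JSON is an equally valid way to hand the data over to JavaScript; a minimal sketch:

# Alternative: serialise the same records as a JSON string
import json
bubbles_json = json.dumps(bubbles)
print(bubbles_json[:200])  # preview the first characters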
Perfect, bubbles contains our data. Time to do some JavaScript. The Jupyter notebook can display JavaScript code in an output cell by using the element.append magic. To display the timeline inside this notebook we need to load the JS/CSS source of the timesheet-advanced package...
from IPython.display import HTML HTML(""" <link rel="stylesheet" type="text/css" href="https://cdn.rawgit.com/ntucakovic/timesheet-advanced.js/ea3ee1ad/dist/timesheet.min.css" /> <script type="text/javascript" src="https://cdn.rawgit.com/ntucakovic/timesheet-advanced.js/ea3ee1ad/dist/timesheet-advanced.min.js"></script> """)
1-timelines.ipynb
loujine/musicbrainz-dataviz
mit
... and to create an output container for our timeline. This cell will be filled when the code in the next cell (new Timesheet(...)) is executed.
%%javascript // this must be executed before the "from IPython.display import Javascript" block element.append('<div id="timesheet-container" style="width: 100%;height: 100%;"></div>');
1-timelines.ipynb
loujine/musicbrainz-dataviz
mit
Last step: we call the Timesheet JavaScript command using the CSS/JS libraries loaded above, our input data (bubbles), the cell where we want our graph, and the timeline limits (min and max dates). Executing the next cell will automatically fill the output cell just above this block.
from IPython.display import Javascript Javascript(""" var bubbles = %s; new Timesheet(bubbles, { container: 'timesheet-container', type: 'parallel', timesheetYearMin: %s, timesheetYearMax: %s, theme: 'light' }); """ % (bubbles, ts['start'].min(), ts['end'].max()))
1-timelines.ipynb
loujine/musicbrainz-dataviz
mit
We have our timeline now! As you can see, the same color is consistently used for the same role. The items on the timeline are clickable links bringing you to the artist page on MusicBrainz. If you can't see the timeline above, you can find a static version on github.io. Display a timeline with vis.js We can try to display the same data with another JavaScript library, vis.js. Again we will need to prepare the data.
v = df.copy() v['start'] = v['start'].apply(lambda date: date.isoformat()) v['end'] = v['end'].apply(lambda date: date.isoformat()) v.drop('mbid', axis=1, inplace=True) v['type'] = v['role'].apply(lambda role: colors[role]) v['label'] = v['member'] + ' (' + v['role'] + ')' v
1-timelines.ipynb
loujine/musicbrainz-dataviz
mit
This time we are not going to inject the data inside a JavaScript string executed by the notebook; instead, we are going to attach the data as JSON to the webpage itself (window) so that vis.js can find it.
# Transform into JSON (using the `v` dataframe prepared for vis.js above)
data = [{'start': line.start,
         'end': line.end,
         'content': line.label,
         'className': line.type
         } for _, line in v.iterrows()]

# Send to JavaScript
import json
from IPython.display import Javascript
Javascript("""window.bandData={};""".format(json.dumps(data, indent=4)))
1-timelines.ipynb
loujine/musicbrainz-dataviz
mit
We need to load the default CSS (from cdnjs.cloudflare.com) and add our custom CSS on top:
%%html <link rel="stylesheet" type="text/css" href="https://cdnjs.cloudflare.com/ajax/libs/vis/4.20.1/vis-timeline-graph2d.min.css" /> %%html <style type="text/css"> /* custom styles for individual items, load this after vis.css/vis-timeline-graph2d.min.css */ .vis-item.red { background-color: red; } .vis-item.blue { background-color: blue; } .vis-item.yellow { background-color: yellow; } .vis-item.green { background-color: greenyellow; } .vis-item.vis-selected { background-color: white; border-color: black; color: black; box-shadow: 0 0 10px gray; } </style>
1-timelines.ipynb
loujine/musicbrainz-dataviz
mit
In order to load the JS library itself, we can use the require mechanism inside the notebook:
%%javascript element.append('<div id="vis-container" style="width: 100%;height: 100%;"></div>'); requirejs.config({ paths: { vis: '//cdnjs.cloudflare.com/ajax/libs/vis/4.20.1/vis' } }); require(['vis'], function(vis){ var data = new vis.DataSet(window.bandData); var options = { editable: false }; // create the timeline var container = document.getElementById('vis-container'); var timeline = new vis.Timeline(container, data, options); })
1-timelines.ipynb
loujine/musicbrainz-dataviz
mit
Keras ์˜ˆ์ œ์˜ ๊ฐ€์ค‘์น˜ ํด๋Ÿฌ์Šคํ„ฐ๋ง <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://www.tensorflow.org/model_optimization/guide/clustering/clustering_example"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org์—์„œ ๋ณด๊ธฐ</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/model_optimization/guide/clustering/clustering_example.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/model_optimization/guide/clustering/clustering_example.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td> <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/model_optimization/guide/clustering/clustering_example.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">๋…ธํŠธ๋ถ ๋‹ค์šด๋กœ๋“œํ•˜๊ธฐ</a></td> </table> ๊ฐœ์š” TensorFlow ๋ชจ๋ธ ์ตœ์ ํ™” ๋„๊ตฌ ํ‚คํŠธ์˜ ์ผ๋ถ€์ธ ๊ฐ€์ค‘์น˜ ํด๋Ÿฌ์Šคํ„ฐ๋ง์— ๋Œ€ํ•œ ์—”๋“œ ํˆฌ ์—”๋“œ ์˜ˆ์ œ๋ฅผ ์†Œ๊ฐœํ•ฉ๋‹ˆ๋‹ค. ๊ธฐํƒ€ ํŽ˜์ด์ง€ ๊ฐ€์ค‘์น˜ ํด๋Ÿฌ์Šคํ„ฐ๋ง์— ๋Œ€ํ•œ ์†Œ๊ฐœ์™€ ์ด๋ฅผ ์‚ฌ์šฉํ•ด์•ผ ํ•˜๋Š”์ง€ ์—ฌ๋ถ€(์ง€์› ๋‚ด์šฉ ํฌํ•จ)๋ฅผ ๊ฒฐ์ •ํ•˜๋ ค๋ฉด ๊ฐœ์š” ํŽ˜์ด์ง€๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. 16๊ฐœ์˜ ํด๋Ÿฌ์Šคํ„ฐ๋กœ ๋ชจ๋ธ์„ ์™„์ „ํ•˜๊ฒŒ ํด๋Ÿฌ์Šคํ„ฐ๋งํ•˜๋Š” ๋“ฑ ํ•ด๋‹น ์‚ฌ์šฉ ์‚ฌ๋ก€์— ํ•„์š”ํ•œ API๋ฅผ ๋น ๋ฅด๊ฒŒ ์ฐพ์œผ๋ ค๋ฉด ์ข…ํ•ฉ ๊ฐ€์ด๋“œ๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ๋‚ด์šฉ ์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ๋‹ค์Œ์„ ์ˆ˜ํ–‰ํ•ฉ๋‹ˆ๋‹ค. MNIST ๋ฐ์ดํ„ฐ์„ธํŠธ๋ฅผ ์œ„ํ•œ tf.keras ๋ชจ๋ธ์„ ์ฒ˜์Œ๋ถ€ํ„ฐ ํ›ˆ๋ จํ•ฉ๋‹ˆ๋‹ค. ๊ฐ€์ค‘์น˜ ํด๋Ÿฌ์Šคํ„ฐ๋ง API๋ฅผ ์ ์šฉํ•˜์—ฌ ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ์ •ํ™•์„ฑ์„ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. ํด๋Ÿฌ์Šคํ„ฐ๋ง์œผ๋กœ๋ถ€ํ„ฐ 6๋ฐฐ ๋” ์ž‘์€ TF ๋ฐ TFLite ๋ชจ๋ธ์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ๊ฐ€์ค‘์น˜ ํด๋Ÿฌ์Šคํ„ฐ๋ง๊ณผ ํ›ˆ๋ จ ํ›„ ์–‘์žํ™”๋ฅผ ๊ฒฐํ•ฉํ•˜์—ฌ 8๋ฐฐ ๋” ์ž‘์€ TFLite ๋ชจ๋ธ์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค. TF์—์„œ TFLite๋กœ ์ •ํ™•์„ฑ์ด ์ง€์†๋˜๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค. ์„ค์ • ์ด Jupyter ๋…ธํŠธ๋ถ์€ ๋กœ์ปฌ virtualenv ๋˜๋Š” colab์—์„œ ์‹คํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ข…์†์„ฑ ์„ค์ •์— ๋Œ€ํ•œ ์ž์„ธํ•œ ๋‚ด์šฉ์€ ์„ค์น˜ ๊ฐ€์ด๋“œ๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”.
! pip install -q tensorflow-model-optimization import tensorflow as tf from tensorflow import keras import numpy as np import tempfile import zipfile import os
site/ko/model_optimization/guide/clustering/clustering_example.ipynb
tensorflow/docs-l10n
apache-2.0
ํด๋Ÿฌ์Šคํ„ฐ๋ง์„ ์‚ฌ์šฉํ•˜์ง€ ์•Š๊ณ  MNIST์šฉ tf.keras ๋ชจ๋ธ ํ›ˆ๋ จํ•˜๊ธฐ
# Load MNIST dataset mnist = keras.datasets.mnist (train_images, train_labels), (test_images, test_labels) = mnist.load_data() # Normalize the input image so that each pixel value is between 0 to 1. train_images = train_images / 255.0 test_images = test_images / 255.0 # Define the model architecture. model = keras.Sequential([ keras.layers.InputLayer(input_shape=(28, 28)), keras.layers.Reshape(target_shape=(28, 28, 1)), keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu), keras.layers.MaxPooling2D(pool_size=(2, 2)), keras.layers.Flatten(), keras.layers.Dense(10) ]) # Train the digit classification model model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) model.fit( train_images, train_labels, validation_split=0.1, epochs=10 )
site/ko/model_optimization/guide/clustering/clustering_example.ipynb
tensorflow/docs-l10n
apache-2.0
๊ธฐ์ค€ ๋ชจ๋ธ์„ ํ‰๊ฐ€ํ•˜๊ณ  ๋‚˜์ค‘์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ์ €์žฅํ•˜๊ธฐ
_, baseline_model_accuracy = model.evaluate( test_images, test_labels, verbose=0) print('Baseline test accuracy:', baseline_model_accuracy) _, keras_file = tempfile.mkstemp('.h5') print('Saving model to: ', keras_file) tf.keras.models.save_model(model, keras_file, include_optimizer=False)
site/ko/model_optimization/guide/clustering/clustering_example.ipynb
tensorflow/docs-l10n
apache-2.0
ํด๋Ÿฌ์Šคํ„ฐ๋ง์„ ์‚ฌ์šฉํ•˜์—ฌ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ๋ชจ๋ธ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ธฐ ์‚ฌ์ „ ํ›ˆ๋ จ๋œ ์ „์ฒด ๋ชจ๋ธ์— cluster_weights() API๋ฅผ ์ ์šฉํ•˜์—ฌ ์••์ถ• ํ›„ ์ ์ ˆํ•œ ์ •ํ™•์„ฑ์„ ์œ ์ง€ํ•˜๋ฉด์„œ ๋ชจ๋ธ ํฌ๊ธฐ๊ฐ€ ์ค„์–ด๋“œ๋Š” ํšจ๊ณผ๋ฅผ ์ž…์ฆํ•ฉ๋‹ˆ๋‹ค. ํ•ด๋‹น ์‚ฌ์šฉ ์‚ฌ๋ก€์—์„œ ์ •ํ™•์„ฑ๊ณผ ์••์ถ•๋ฅ ์˜ ๊ท ํ˜•์„ ๊ฐ€์žฅ ์ž˜ ์œ ์ง€ํ•˜๋Š” ๋ฐฉ๋ฒ•์€ ํฌ๊ด„์  ๊ฐ€์ด๋“œ์˜ ๋ ˆ์ด์–ด๋ณ„ ์˜ˆ๋ฅผ ์ฐธ์กฐํ•˜์„ธ์š”. ๋ชจ๋ธ ์ •์˜ ๋ฐ ํด๋Ÿฌ์Šคํ„ฐ๋ง API ์ ์šฉํ•˜๊ธฐ ๋ชจ๋ธ์„ ํด๋Ÿฌ์Šคํ„ฐ๋ง API๋กœ ์ „๋‹ฌํ•˜๊ธฐ ์ „์— ๋ชจ๋ธ์ด ํ›ˆ๋ จ๋˜์—ˆ๊ณ  ์ˆ˜์šฉ ๊ฐ€๋Šฅํ•œ ์ •ํ™•์„ฑ์„ ๋ณด์ด๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค.
import tensorflow_model_optimization as tfmot cluster_weights = tfmot.clustering.keras.cluster_weights CentroidInitialization = tfmot.clustering.keras.CentroidInitialization clustering_params = { 'number_of_clusters': 16, 'cluster_centroids_init': CentroidInitialization.LINEAR } # Cluster a whole model clustered_model = cluster_weights(model, **clustering_params) # Use smaller learning rate for fine-tuning clustered model opt = tf.keras.optimizers.Adam(learning_rate=1e-5) clustered_model.compile( loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), optimizer=opt, metrics=['accuracy']) clustered_model.summary()
site/ko/model_optimization/guide/clustering/clustering_example.ipynb
tensorflow/docs-l10n
apache-2.0
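The per-layer approach referenced above is not part of this tutorial; as a hedged sketch, clustering could be restricted to, say, the Dense layers by cloning the model with a clone_function, in the same spirit as the comprehensive guide.

# Sketch (assumption): cluster only the Dense layers instead of the whole model
def apply_clustering_to_dense(layer):
    if isinstance(layer, tf.keras.layers.Dense):
        return cluster_weights(layer, **clustering_params)
    return layer

per_layer_clustered_model = tf.keras.models.clone_model(
    model,
    clone_function=apply_clustering_to_dense,
)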
๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ณ  ๊ธฐ์ค€ ๋Œ€๋น„ ์ •ํ™•์„ฑ ํ‰๊ฐ€ํ•˜๊ธฐ ํ•˜๋‚˜์˜ epoch ๋™์•ˆ ํด๋Ÿฌ์Šคํ„ฐ๋ง์ด ์žˆ๋Š” ๋ชจ๋ธ์„ ๋ฏธ์„ธ ์กฐ์ •ํ•ฉ๋‹ˆ๋‹ค.
# Fine-tune model clustered_model.fit( train_images, train_labels, batch_size=500, epochs=1, validation_split=0.1)
site/ko/model_optimization/guide/clustering/clustering_example.ipynb
tensorflow/docs-l10n
apache-2.0
์ด ์˜ˆ์˜ ๊ฒฝ์šฐ, ๊ธฐ์ค€๊ณผ ๋น„๊ตํ•˜์—ฌ ํด๋Ÿฌ์Šคํ„ฐ๋ง ํ›„ ํ…Œ์ŠคํŠธ ์ •ํ™•์„ฑ์˜ ์†์‹ค์ด ๋ฏธ๋ฏธํ•ฉ๋‹ˆ๋‹ค.
_, clustered_model_accuracy = clustered_model.evaluate( test_images, test_labels, verbose=0) print('Baseline test accuracy:', baseline_model_accuracy) print('Clustered test accuracy:', clustered_model_accuracy)
site/ko/model_optimization/guide/clustering/clustering_example.ipynb
tensorflow/docs-l10n
apache-2.0
ํด๋Ÿฌ์Šคํ„ฐ๋ง์œผ๋กœ๋ถ€ํ„ฐ 6๋ฐฐ ๋” ์ž‘์€ ๋ชจ๋ธ ๋งŒ๋“ค๊ธฐ ํด๋Ÿฌ์Šคํ„ฐ๋ง์˜ ์••์ถ• ์ด์ ์„ ํ™•์ธํ•˜๋ ค๋ฉด strip_clustering๊ณผ ํ‘œ์ค€ ์••์ถ• ์•Œ๊ณ ๋ฆฌ์ฆ˜(์˜ˆ: gzip ์ด์šฉ) ์ ์šฉ์ด ๋ชจ๋‘ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๋จผ์ €, TensorFlow๋ฅผ ์œ„ํ•œ ์••์ถ• ๊ฐ€๋Šฅํ•œ ๋ชจ๋ธ์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ, strip_clustering์€ ํ›ˆ๋ จ ์ค‘์—๋งŒ ํด๋Ÿฌ์Šคํ„ฐ๋ง์— ํ•„์š”ํ•œ ๋ชจ๋“  ๋ณ€์ˆ˜(์˜ˆ: ํด๋Ÿฌ์Šคํ„ฐ ์ค‘์‹ฌ๊ณผ ์ธ๋ฑ์Šค๋ฅผ ์ €์žฅํ•˜๊ธฐ ์œ„ํ•œ tf.Variable)๋ฅผ ์ œ๊ฑฐํ•ฉ๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ณ€์ˆ˜๋ฅผ ์ œ๊ฑฐํ•˜์ง€ ์•Š์œผ๋ฉด ์ถ”๋ก  ์ค‘์— ๋ชจ๋ธ ํฌ๊ธฐ๊ฐ€ ์ฆ๊ฐ€ํ•˜๊ฒŒ ๋ฉ๋‹ˆ๋‹ค.
final_model = tfmot.clustering.keras.strip_clustering(clustered_model) _, clustered_keras_file = tempfile.mkstemp('.h5') print('Saving clustered model to: ', clustered_keras_file) tf.keras.models.save_model(final_model, clustered_keras_file, include_optimizer=False)
site/ko/model_optimization/guide/clustering/clustering_example.ipynb
tensorflow/docs-l10n
apache-2.0
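An optional check, not in the original notebook: after strip_clustering, each clustered kernel should contain at most number_of_clusters (here 16) unique values, which can be verified roughly like this.

# Sketch: count unique kernel values per layer in the stripped model
for layer in final_model.layers:
    for weight in layer.weights:
        if 'kernel' in weight.name:
            print(weight.name, '-> unique values:', len(np.unique(weight.numpy())))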
๊ทธ๋Ÿฐ ๋‹ค์Œ, TFLite๋ฅผ ์œ„ํ•œ ์••์ถ• ๊ฐ€๋Šฅํ•œ ๋ชจ๋ธ์„ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ํด๋Ÿฌ์Šคํ„ฐ๋ง๋œ ๋ชจ๋ธ์„ ๋Œ€์ƒ ๋ฐฑ์—”๋“œ์—์„œ ์‹คํ–‰ ๊ฐ€๋Šฅํ•œ ํ˜•์‹์œผ๋กœ ๋ณ€ํ™˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. TensorFlow Lite๋Š” ๋ชจ๋ฐ”์ผ ๊ธฐ๊ธฐ์— ๋ฐฐํฌํ•˜๋Š” ๋ฐ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์˜ˆ์ž…๋‹ˆ๋‹ค.
clustered_tflite_file = '/tmp/clustered_mnist.tflite' converter = tf.lite.TFLiteConverter.from_keras_model(final_model) tflite_clustered_model = converter.convert() with open(clustered_tflite_file, 'wb') as f: f.write(tflite_clustered_model) print('Saved clustered TFLite model to:', clustered_tflite_file)
site/ko/model_optimization/guide/clustering/clustering_example.ipynb
tensorflow/docs-l10n
apache-2.0
Define a helper function to actually compress the models via gzip and measure the zipped size.
def get_gzipped_model_size(file): # It returns the size of the gzipped model in bytes. import os import zipfile _, zipped_file = tempfile.mkstemp('.zip') with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f: f.write(file) return os.path.getsize(zipped_file)
site/ko/model_optimization/guide/clustering/clustering_example.ipynb
tensorflow/docs-l10n
apache-2.0
ํด๋Ÿฌ์Šคํ„ฐ๋ง์œผ๋กœ๋ถ€ํ„ฐ ๋ชจ๋ธ์ด 6๋ฐฐ ๋” ์ž‘์•„์ง„ ๊ฒƒ์„ ํ™•์ธํ•˜์„ธ์š”.
print("Size of gzipped baseline Keras model: %.2f bytes" % (get_gzipped_model_size(keras_file))) print("Size of gzipped clustered Keras model: %.2f bytes" % (get_gzipped_model_size(clustered_keras_file))) print("Size of gzipped clustered TFlite model: %.2f bytes" % (get_gzipped_model_size(clustered_tflite_file)))
site/ko/model_optimization/guide/clustering/clustering_example.ipynb
tensorflow/docs-l10n
apache-2.0
๊ฐ€์ค‘์น˜ ํด๋Ÿฌ์Šคํ„ฐ๋ง๊ณผ ํ›ˆ๋ จ ํ›„ ์–‘์žํ™”๋ฅผ ๊ฒฐํ•ฉํ•˜์—ฌ 8๋ฐฐ ๋” ์ž‘์€ TFLite ๋ชจ๋ธ ๋งŒ๋“ค๊ธฐ ์ถ”๊ฐ€์ ์ธ ์ด์ ์„ ์–ป๊ธฐ ์œ„ํ•ด ํด๋Ÿฌ์Šคํ„ฐ๋งํ•œ ๋ชจ๋ธ์— ํ›ˆ๋ จ ํ›„ ์–‘์žํ™”๋ฅผ ์ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.
converter = tf.lite.TFLiteConverter.from_keras_model(final_model) converter.optimizations = [tf.lite.Optimize.DEFAULT] tflite_quant_model = converter.convert() _, quantized_and_clustered_tflite_file = tempfile.mkstemp('.tflite') with open(quantized_and_clustered_tflite_file, 'wb') as f: f.write(tflite_quant_model) print('Saved quantized and clustered TFLite model to:', quantized_and_clustered_tflite_file) print("Size of gzipped baseline Keras model: %.2f bytes" % (get_gzipped_model_size(keras_file))) print("Size of gzipped clustered and quantized TFlite model: %.2f bytes" % (get_gzipped_model_size(quantized_and_clustered_tflite_file)))
site/ko/model_optimization/guide/clustering/clustering_example.ipynb
tensorflow/docs-l10n
apache-2.0
TF์—์„œ TFLite๋กœ ์ •ํ™•์„ฑ์ด ์ง€์†๋˜๋Š”์ง€ ํ™•์ธํ•˜๊ธฐ ํ…Œ์ŠคํŠธ ๋ฐ์ดํ„ฐ์„ธํŠธ์—์„œ TFLite ๋ชจ๋ธ์„ ํ‰๊ฐ€ํ•˜๋Š” ๋„์šฐ๋ฏธ ํ•จ์ˆ˜๋ฅผ ์ •์˜ํ•ฉ๋‹ˆ๋‹ค.
def eval_model(interpreter): input_index = interpreter.get_input_details()[0]["index"] output_index = interpreter.get_output_details()[0]["index"] # Run predictions on every image in the "test" dataset. prediction_digits = [] for i, test_image in enumerate(test_images): if i % 1000 == 0: print('Evaluated on {n} results so far.'.format(n=i)) # Pre-processing: add batch dimension and convert to float32 to match with # the model's input data format. test_image = np.expand_dims(test_image, axis=0).astype(np.float32) interpreter.set_tensor(input_index, test_image) # Run inference. interpreter.invoke() # Post-processing: remove batch dimension and find the digit with highest # probability. output = interpreter.tensor(output_index) digit = np.argmax(output()[0]) prediction_digits.append(digit) print('\n') # Compare prediction results with ground truth labels to calculate accuracy. prediction_digits = np.array(prediction_digits) accuracy = (prediction_digits == test_labels).mean() return accuracy
site/ko/model_optimization/guide/clustering/clustering_example.ipynb
tensorflow/docs-l10n
apache-2.0
ํด๋Ÿฌ์Šคํ„ฐ๋ง๋˜๊ณ  ์–‘์žํ™”๋œ ๋ชจ๋ธ์„ ํ‰๊ฐ€ํ•œ ๋‹ค์Œ, TensorFlow์˜ ์ •ํ™•์„ฑ์ด TFLite ๋ฐฑ์—”๋“œ๊นŒ์ง€ ์œ ์ง€๋˜๋Š”์ง€ ํ™•์ธํ•ฉ๋‹ˆ๋‹ค.
interpreter = tf.lite.Interpreter(model_content=tflite_quant_model) interpreter.allocate_tensors() test_accuracy = eval_model(interpreter) print('Clustered and quantized TFLite test_accuracy:', test_accuracy) print('Clustered TF test accuracy:', clustered_model_accuracy)
site/ko/model_optimization/guide/clustering/clustering_example.ipynb
tensorflow/docs-l10n
apache-2.0
The expected 8124 instances in my data file are confirmed. <br> Below is the list of features and their coding, copied from the database description file.
1. cap-shape: bell=b,conical=c,convex=x,flat=f,knobbed=k,sunken=s <br>
2. cap-surface: fibrous=f,grooves=g,scaly=y,smooth=s <br>
3. cap-color: brown=n,buff=b,cinnamon=c,gray=g,green=r,pink=p,purple=u,red=e,white=w,yellow=y <br>
4. bruises?: bruises=t,no=f <br>
5. odor: almond=a,anise=l,creosote=c,fishy=y,foul=f,musty=m,none=n,pungent=p,spicy=s <br>
6. gill-attachment: attached=a,descending=d,free=f,notched=n <br>
7. gill-spacing: close=c,crowded=w,distant=d <br>
8. gill-size: broad=b,narrow=n <br>
9. gill-color: black=k,brown=n,buff=b,chocolate=h,gray=g,green=r,orange=o,pink=p,purple=u,red=e, white=w,yellow=y <br>
10. stalk-shape: enlarging=e,tapering=t <br>
11. stalk-root: bulbous=b,club=c,cup=u,equal=e,rhizomorphs=z,rooted=r,missing=? <br>
12. stalk-surface-above-ring: fibrous=f,scaly=y,silky=k,smooth=s <br>
13. stalk-surface-below-ring: fibrous=f,scaly=y,silky=k,smooth=s <br>
14. stalk-color-above-ring: brown=n,buff=b,cinnamon=c,gray=g,orange=o,pink=p,red=e,white=w,yellow=y <br>
15. stalk-color-below-ring: brown=n,buff=b,cinnamon=c,gray=g,orange=o,pink=p,red=e,white=w,yellow=y <br>
16. veil-type: partial=p,universal=u <br>
17. veil-color: brown=n,orange=o,white=w,yellow=y <br>
18. ring-number: none=n,one=o,two=t <br>
19. ring-type: cobwebby=c,evanescent=e,flaring=f,large=l,none=n,pendant=p,sheathing=s,zone=z <br>
20. spore-print-color: black=k,brown=n,buff=b,chocolate=h,green=r,orange=o,purple=u,white=w,yellow=y <br>
21. population: abundant=a,clustered=c,numerous=n,scattered=s,several=v,solitary=y <br>
22. habitat: grasses=g,leaves=l,meadows=m,paths=p,urban=u,waste=w,woods=d <br>
headers = ['classif','cap_shape','cap_surface','cap_colour','bruises','odor','gill_attach','gill_space','gill_size', 'gill_color','stalk_shape','stalk_root','stalk_surf_above_ring','stalk_surf_below_ring', 'stalk_color_above_ring','stalk_color_below_ring','veil_type','veil_color','ring_number', 'ring_type','spore_print_color','population','habitat'] print(len(headers)) #Put the list of lists into dataframe and make sure everything look ok df = pd.DataFrame(list1, columns=headers) df.head() df.describe()
examples/dulybina/1_exploratory_analysis.ipynb
georgetown-analytics/machine-learning
mit
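For completeness: list1 used above is defined earlier in the notebook and is not shown in this excerpt; a hedged sketch of how it could be built from the raw UCI file (the agaricus-lepiota.data filename is an assumption here) looks like this.

# Sketch (assumption): read the raw mushroom records into a list of lists
with open('agaricus-lepiota.data') as f:
    list1 = [line.strip().split(',') for line in f if line.strip()]
print(len(list1))  # expected: 8124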
As expected, all of the values across all features are categorical. Later, I will have to encode them into numerical values (one possible encoding is sketched at the end of this section).
table = pd.crosstab(index=df['classif'], columns="count") table fig = plt.figure(figsize=(2,2)) ax1 = fig.add_subplot(111) ax1.set_xlabel('classification') ax1.set_ylabel('count') ax1.set_title("By classification") df['classif'].value_counts().plot(kind='bar',color = '#4C72B0')
examples/dulybina/1_exploratory_analysis.ipynb
georgetown-analytics/machine-learning
mit
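For the encoding step mentioned above, here is one possible sketch; using scikit-learn's LabelEncoder is an assumption on my part, and one-hot encoding via pd.get_dummies would be the main alternative.

# Sketch: integer-encode every categorical column
from sklearn.preprocessing import LabelEncoder

df_encoded = df.apply(LabelEncoder().fit_transform)
df_encoded.head()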