Alternatively, we can create different subplots for each time-series
fig, ax = plt.subplots(2, 2)  # ax is now an array!
ax[0, 0].plot(data[32, 32, 15, :])
ax[0, 1].plot(data[32, 32, 14, :])
ax[1, 0].plot(data[32, 32, 13, :])
ax[1, 1].plot(data[32, 32, 12, :])
ax[1, 0].set_xlabel('Time (TR)')
ax[1, 1].set_xlabel('Time (TR)')
ax[0, 0].set_ylabel('MRI signal (a.u.)')
ax[1, 0].set_ylabel('M...
beginner-python/002-plots.ipynb
ohbm/brain-hacking-101
apache-2.0
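The 2x2 grid pattern above can be sketched end-to-end. This is a minimal, self-contained version; the array shape, random data, and file name are invented for illustration:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

# Hypothetical stand-in for the 4-D fMRI array: (x, y, slice, time)
data = np.random.rand(64, 64, 30, 180)

fig, ax = plt.subplots(2, 2)          # ax is a 2x2 array of Axes objects
for row in range(2):
    for col in range(2):
        z = 15 - (2 * row + col)      # slices 15, 14, 13, 12 as above
        ax[row, col].plot(data[32, 32, z, :])
ax[1, 0].set_xlabel('Time (TR)')
ax[0, 0].set_ylabel('MRI signal (a.u.)')
fig.savefig('grid.png')
print(ax.shape)  # (2, 2)
```

Indexing `ax[row, col]` works because `plt.subplots(2, 2)` returns the Axes as a 2x2 NumPy array.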
Another kind of plot is an image. For example, we can take a look at the mean and standard deviation of the time-series for one entire slice:
fig, ax = plt.subplots(1, 2)
# We'll use a reasonable colormap, and no smoothing:
ax[0].matshow(np.mean(data[:, :, 15], -1), cmap=mpl.cm.hot)
ax[0].axis('off')
ax[1].matshow(np.std(data[:, :, 15], -1), cmap=mpl.cm.hot)
ax[1].axis('off')
fig.set_size_inches([12, 6])
# You can save the figure to file:
fig.savefig('mean_a...
beginner-python/002-plots.ipynb
ohbm/brain-hacking-101
apache-2.0
There are many other kinds of figures you could create:
fig, ax = plt.subplots(2, 2)
# Note the use of `ravel` to create a 1D array:
ax[0, 0].hist(np.ravel(data))
ax[0, 0].set_xlabel("fMRI signal")
ax[0, 0].set_ylabel("# voxels")
# Bars are 0.8 wide:
ax[0, 1].bar([0.6, 1.6, 2.6, 3.6],
             [np.mean(data[:, :, 15]),
              np.mean(data[:, :, 14]),
              np.mean(data[:, :, 13]),
              np.mean(data[...
beginner-python/002-plots.ipynb
ohbm/brain-hacking-101
apache-2.0
\newpage About me Research Fellow, University of Nottingham: orcid Director, Geolytics Limited - A spatial data analytics consultancy About this presentation Available on GitHub - https://github.com/AntArch/Presentations_Github/ Fully referenced PDF \newpage A potted history of mapping In the beginning was the geo...
from IPython.display import YouTubeVideo
YouTubeVideo('jUzGF401vLc')
20150916_OGC_Reuse_under_licence/.ipynb_checkpoints/20150916_OGC_Reuse_under_licence-checkpoint_conflict-20150915-181258.ipynb
AntArch/Presentations_Github
cc0-1.0
Extraction rate vs. extraction size
The current default is 25 spectra x 50 wavelengths extracted at a time. On both KNL and Haswell, we could do better with smaller sub-extractions.
xlabels = sorted(set(hsw['nspec']))
ylabels = sorted(set(knl['nwave']))
set_cmap('viridis')
figure(figsize=(12,4))
subplot(121); plotimg(rate_hsw, xlabels, ylabels); title('Haswell rate')
subplot(122); plotimg(rate_knl, xlabels, ylabels); title('KNL rate')
doc/extract-size.ipynb
sbailey/knltest
bsd-3-clause
3x improvement is possible
Going to 5-10 spectra x 20 wavelengths gains a factor of ~3x in speed on both KNL and Haswell.
figure(figsize=(12,4))
subplot(121); plotimg(rate_hsw/rate_hsw[-1,-1], xlabels, ylabels); title('Haswell rate improvement')
subplot(122); plotimg(rate_knl/rate_knl[-1,-1], xlabels, ylabels); title('KNL rate improvement')
doc/extract-size.ipynb
sbailey/knltest
bsd-3-clause
Haswell to KNL performance The best parameters for Haswell are ~7x faster than the best parameters for KNL, and for a given extraction size (nspec,nwave), Haswell is 5x-8x faster than KNL per process.
r = np.max(rate_hsw) / np.max(rate_knl)
print("Haswell/KNL = {}".format(r))
plotimg(rate_hsw/rate_knl, xlabels, ylabels)
title('Haswell / KNL performance')
doc/extract-size.ipynb
sbailey/knltest
bsd-3-clause
First we look into the zarr folder of the era5-pds bucket to find out which variables are available. Assuming that all variables are available for all years, we inspect a random year-month of data.
bucket = 'era5-pds'
# Make sure you provide / at the end
prefix = 'zarr/2008/01/data/'
client = boto3.client('s3')
result = client.list_objects(Bucket=bucket, Prefix=prefix, Delimiter='/')
for o in result.get('CommonPrefixes'):
    print(o.get('Prefix'))
client = Client()
client
fs = s3fs.S3FileSystem(anon=False)
api-examples/ERA5_zarr_example.ipynb
planet-os/notebooks
mit
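The `Delimiter='/'` argument is what makes the listing return "folders" as `CommonPrefixes`. Here is a minimal sketch of parsing such a response, using a canned dict in place of a live boto3 call; the two prefixes below are invented examples:

```python
# Canned response shaped like boto3's list_objects output when Delimiter='/' is used;
# a real call would be client.list_objects(Bucket=bucket, Prefix=prefix, Delimiter='/')
result = {
    'CommonPrefixes': [
        {'Prefix': 'zarr/2008/01/data/air_temperature_at_2_metres.zarr/'},
        {'Prefix': 'zarr/2008/01/data/sea_surface_temperature.zarr/'},
    ]
}

# Each CommonPrefix is one "subfolder"; its last path component names the variable
variables = [o['Prefix'].rstrip('/').split('/')[-1]
             for o in result.get('CommonPrefixes', [])]
print(variables)
```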
Here we define some functions to read in zarr data.
def inc_mon(indate):
    if indate.month < 12:
        return datetime.datetime(indate.year, indate.month+1, 1)
    else:
        return datetime.datetime(indate.year+1, 1, 1)

def gen_d_range(start, end):
    rr = []
    while start <= end:
        rr.append(start)
        start = inc_mon(start)
    return rr

def get...
api-examples/ERA5_zarr_example.ipynb
planet-os/notebooks
mit
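The month-stepping helpers are short enough to restate in full. This self-contained sketch reproduces the `inc_mon`/`gen_d_range` pair shown (truncated) above and demonstrates them on an invented date range:

```python
import datetime

def inc_mon(indate):
    # Advance to the first day of the following month
    if indate.month < 12:
        return datetime.datetime(indate.year, indate.month + 1, 1)
    return datetime.datetime(indate.year + 1, 1, 1)

def gen_d_range(start, end):
    # List of month-start datetimes from start to end, inclusive
    rr = []
    while start <= end:
        rr.append(start)
        start = inc_mon(start)
    return rr

months = gen_d_range(datetime.datetime(2019, 11, 1), datetime.datetime(2020, 2, 1))
print([m.strftime('%Y-%m') for m in months])  # ['2019-11', '2019-12', '2020-01', '2020-02']
```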
This is where we read in the data. We need to define the time range and variable name. In this example, we also choose to select only the area over Australia.
%%time
tmp_a = gen_zarr_range(datetime.datetime(1979,1,1), datetime.datetime(2020,3,31), 'air_temperature_at_2_metres')
tmp_all = xr.concat(tmp_a, dim='time0')
# Convert from Kelvin to Celsius
tmp = tmp_all.air_temperature_at_2_metres.sel(lon=slice(110,160), lat=slice(-10,-45)) - 273.15
api-examples/ERA5_zarr_example.ipynb
planet-os/notebooks
mit
Here we read in another variable, this time for only one month, since we want to use it only for masking.
sea_data = gen_zarr_range(datetime.datetime(2018,1,1), datetime.datetime(2018,1,1), 'sea_surface_temperature')
sea_data_all = xr.concat(sea_data, dim='time0').sea_surface_temperature.sel(lon=slice(110,160), lat=slice(-10,-45))
api-examples/ERA5_zarr_example.ipynb
planet-os/notebooks
mit
We decided to use sea surface temperature data for making a sea-land mask.
sea_data_all0 = sea_data_all[0].values
mask = np.isnan(sea_data_all0)
api-examples/ERA5_zarr_example.ipynb
planet-os/notebooks
mit
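The masking idea is that SST is undefined (NaN) over land, so `np.isnan` yields a boolean land mask for free. A toy sketch with an invented 2x3 field standing in for `sea_data_all[0].values`:

```python
import numpy as np

# Toy SST field: NaN over land, temperatures over sea
sst = np.array([[np.nan, 21.5, 22.0],
                [np.nan, np.nan, 21.8]])
mask = np.isnan(sst)   # True where SST is undefined, i.e. over land
print(mask.sum())      # number of land pixels
```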
Mask out the data over the sea. To find average temperatures over land, it is important to mask out data over the ocean.
tmp_masked = tmp.where(mask)
tmp_mean = tmp_masked.mean('time0').compute()
api-examples/ERA5_zarr_example.ipynb
planet-os/notebooks
mit
Now we plot the all-time (1980-2019) average temperature over Australia. This time we use only xarray's plotting tools.
ax = plt.axes(projection=ccrs.Orthographic(130, -20))
tmp_mean.plot.contourf(ax=ax, transform=ccrs.PlateCarree())
ax.set_global()
ax.coastlines();
plt.draw()
api-examples/ERA5_zarr_example.ipynb
planet-os/notebooks
mit
Now we compute the yearly average temperature over the Australian land area.
yearly_tmp_AU = tmp_masked.groupby('time0.year').mean('time0').mean(dim=['lon','lat'])
f, ax = plt.subplots(1, 1)
yearly_tmp_AU.plot.line();
plt.draw()
api-examples/ERA5_zarr_example.ipynb
planet-os/notebooks
mit
Topic 4: Data Structures in Python
4.1 Sequences
Strings, tuples, and lists are all considered sequences in Python, which is why there are many operations that work on all three of them.
4.1.1 Iterating
# When at the top of a loop, the 'in' keyword in Python will iterate through all of the sequence's
# members in order. For strings, members are individual characters; for lists and tuples, they're
# the items contained.
# Task: Given a list of lowercase words, print whether the word has a vowel. Example: if the inpu...
events/code-at-night/archive/python_talk/intro_to_python_soln.ipynb
PrincetonACM/princetonacm.github.io
mit
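One way the described task could be solved (the word list below is invented; the stated exercise likely expects something along these lines):

```python
# Check each word for a vowel by iterating over its characters with 'in'
words = ["sky", "apple", "rhythm", "queue"]
results = {}
for word in words:
    has_vowel = False
    for ch in word:            # 'in' iterates over the characters of a string
        if ch in "aeiou":
            has_vowel = True
            break
    results[word] = has_vowel
    print(word, has_vowel)
```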
Create an 'economy' object with N skill-level groups.
N = 10  # number of skill
E = Economy(N)
E.GAMMA = 0.8
E.ALPHA = 0.6
Xbar = [E.TBAR, E.LBAR]
LucasSpanControl.ipynb
jhconning/geqfarm
gpl-3.0
Let's summarize the parameters as they now stand:
E.print_params()
LucasSpanControl.ipynb
jhconning/geqfarm
gpl-3.0
When Lucas = False, farms don't have to specialize in farm management. When True, then as in Lucas (1978) household labor must be dedicated to management and labor must be hired.
Assumed skill distribution
Let's assume population is uniform across the N groups but skill is rising.
E.s = np.linspace(1, 5, num=N)
plt.title('skill distribution')
plt.xlabel('group index')
plt.plot(x, E.s, marker='o');
LucasSpanControl.ipynb
jhconning/geqfarm
gpl-3.0
Not a Lucas economy
E.Lucas = False
rwc, (Tc, Lc) = E.smallhold_eq([E.TBAR, E.LBAR], E.s)
rwc
plt.title('Eqn factor use')
plt.xlabel('group index')
plt.plot(x, Tc, marker='o');
plt.plot(x, Lc, marker='o');
plt.title('induced farm size (factor use) distribution')
plt.plot(Tc, marker='o')
plt.plot(Lc, marker='x');
E.excessD(rwc, Xbar, E.s)  # sh...
LucasSpanControl.ipynb
jhconning/geqfarm
gpl-3.0
A Lucas Economy
E.Lucas = True
rwc_L, (Tc_L, Lc_L) = E.smallhold_eq([E.TBAR, E.LBAR], E.s)
LucasSpanControl.ipynb
jhconning/geqfarm
gpl-3.0
In the Lucas equilibrium there is less unskilled labor (since managers cannot be laborers) so all else equal we would expect higher wages.
rwc, rwc_L
LucasSpanControl.ipynb
jhconning/geqfarm
gpl-3.0
In this sample equilibrium the two lowest skill groups become pure laborers.
plt.title('Eqn factor use')
plt.xlabel('group index')
plt.plot(x, Tc_L, marker='o', linestyle='None', label='land')
plt.plot(x, Lc_L, marker='o', linestyle='None', label='labor')
plt.legend();
LucasSpanControl.ipynb
jhconning/geqfarm
gpl-3.0
Note that now about 50% of labor is going into management. However, this is an artifact of the uniform distribution of skill, which puts so much population in the higher skill groups.
E.LBAR-sum(Lc_L)
LucasSpanControl.ipynb
jhconning/geqfarm
gpl-3.0
Note that the two economies (one Lucas style the other non-Lucas) are not comparable because they have fundamentally different production technologies.
E.prodn([Tc_L, Lc_L], E.s)
sum(E.prodn([Tc_L, Lc_L], E.s))
E.prodn([Tc, Lc], E.s)
sum(E.prodn([Tc, Lc], E.s))
LucasSpanControl.ipynb
jhconning/geqfarm
gpl-3.0
Just out of interest, how much lower would output be in the non-Lucas economy if every household were self-sufficient?
Tce = np.ones(N)*(E.TBAR/N)
Lce = np.ones(N)*(E.LBAR/N)
sum(E.prodn([Tce, Lce], E.s))
E.prodn([Tce, Lce], E.s)
sum(rwc*[E.TBAR/N, E.LBAR/N]) + E.prodn([Tce, Lce], E.s) - rwc*[Tc, Lc]
LucasSpanControl.ipynb
jhconning/geqfarm
gpl-3.0
Market Power distortions
(Xrc, Xr, wc, wr) = scene_print(E, 10, detail=True)
factor_plot(E, Xrc, Xr)
LucasSpanControl.ipynb
jhconning/geqfarm
gpl-3.0
Lucas = True
E.Lucas = True
rwcl, (Tcl, Lcl) = E.smallhold_eq([E.TBAR, E.LBAR], E.s)
Lcl
E.excessD(rwcl, Xbar, E.s)
plt.title('Competitive: Induced farm size (factor use) distribution')
plt.plot(Tcl, marker='o', label='land (Lucas)')
plt.plot(Lcl, marker='x', label='labor (Lucas)')
plt.plot(Tc, '-o', label='land ')
plt.plot(Lc, marker='x...
LucasSpanControl.ipynb
jhconning/geqfarm
gpl-3.0
Not that the two economies are directly comparable (technologies are not the same)... but in the Lucas economy there will be fewer operating farms and a lower supply of farm labor (since the more skilled become full-time managers). The farms that do operate will therefore use more land and less labor compared to the no...
E.Lucas = True
E.smallhold_eq([E.TBAR, E.LBAR/2], E.s)
LucasSpanControl.ipynb
jhconning/geqfarm
gpl-3.0
Cartel equilibria
(Xrcl,Xrl,wcl,wrl) = scene_print(E, numS=10, detail=True)
LucasSpanControl.ipynb
jhconning/geqfarm
gpl-3.0
Compared to the original scenario, the landlord with market power in the Lucas scenario faces a countervailing force: if she pushes the wage too low then she makes self-managed production more attractive for a would-be medium-sized farmer who is not now in production. From the solution it would appear that beyond ...
E.Lucas = True
factor_plot(E, Xrcl, Xrl)
E.Lucas = False
factor_plot(E, Xrc, Xr)
E.Lucas = True
E.cartel_eq(0.5)
E.cartel_eq(0.6)
E.cartel_eq(0.2)
Lr
LucasSpanControl.ipynb
jhconning/geqfarm
gpl-3.0
Something is still not right... labor used/demanded exceeds labor supply.
(r, w), (Tr, Lr) = E.cartel_eq(0.5)
sum(Lr), np.count_nonzero(Lr)*(E.LBAR)/E.N
sum(Lr)
fringe = E.smallhold_eq([E.TBAR, E.LBAR/2], E.s)
fringe.w
fringe.X
fringe.X[0]
LucasSpanControl.ipynb
jhconning/geqfarm
gpl-3.0
We aim to minimize the output quantile. To do that, we create a q_func object from the function quantile_func to associate a probability $\alpha$ to a function that computes the empirical quantile from a given sample.
from dependence import quantile_func

alpha = 0.05
q_func = quantile_func(alpha)
indep_result.q_func = q_func
examples/archive/grid-search-Copy1.ipynb
NazBen/impact-of-dependence
mit
Grid Search Approach
Firstly, we consider a grid-search approach in order to compare its performance with the iterative algorithm. The discretization can be made on the parameter space or on another concordance measure such as Kendall's tau. The example below shows a grid search on the parameter space.
%%snakeviz
K = 500
n = 10000
grid_type = 'lhs'
dep_measure = 'parameter'
grid_result = quant_estimate.gridsearch(n_dep_param=K, n_input_sample=n, grid_type=grid_type, dep_measure=dep_measure, random_state=random_state)
examples/archive/grid-search-Copy1.ipynb
NazBen/impact-of-dependence
mit
As for the individual problem, we can also do a bootstrap for each parameter. Because we have $K$ parameters, we can bootstrap the $K$ samples, compute the $K$ quantiles for each bootstrap replication, and take the minimum quantile for each bootstrap.
grid_result.compute_bootstraps(n_bootstrap=5000)
boot_min_quantiles = grid_result.bootstrap_samples.min(axis=0)
boot_argmin_quantiles = grid_result.bootstrap_samples.argmin(axis=0).ravel().tolist()
boot_min_params = [grid_result.dep_params[idx][0] for idx in boot_argmin_quantiles]
fig, axes = plt.subplots(1, 2, figsiz...
examples/archive/grid-search-Copy1.ipynb
NazBen/impact-of-dependence
mit
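The min/argmin-over-parameters step can be sketched with synthetic numbers. The array below is an invented stand-in for `grid_result.bootstrap_samples` ($K$ rows of bootstrap quantiles); row 0 is constructed to have the lowest quantiles so the argmin is predictable:

```python
import numpy as np

rng = np.random.RandomState(0)
# Toy stand-in: rows = K dependence parameters, columns = bootstrap replications
K, n_boot = 4, 1000
bootstrap_samples = rng.normal(loc=np.arange(K)[:, None], scale=0.1, size=(K, n_boot))

# For each bootstrap replication, take the minimum quantile across the K parameters,
# and record which parameter achieved it
boot_min = bootstrap_samples.min(axis=0)
boot_argmin = bootstrap_samples.argmin(axis=0)
print(boot_min.shape, np.bincount(boot_argmin).argmax())  # (1000,) 0
```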
Plotting the results from the visual search experiment The results (fabricated data) are stored in a csv file where each row corresponds to one trial (observation). Each trial contains information about the participant ID (p_id), the trial number (trial), the reaction time in ms (rt), the condition (BB or RB; they indi...
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Read data into pandas dataframe
df = pd.read_csv('img\\results_visual_search.csv', sep='\t')
print(df.head())

# Plot reaction times across participants
sns.barplot(x='p_id', y='rt', data=df)
plt.xlabel('Participant ID')
plt.ylabel('R...
Week7_lecture.ipynb
marcus-nystrom/python_course
gpl-3.0
Perhaps it would make more sense to construct the plot based on the average value for each participant from each trial and condition.
# Take the mean over different trials
df_avg = df.groupby(['p_id', 'n_distractors', 'condition']).mean()
df_avg.reset_index(inplace=True)

# Plot reaction times across
sns.factorplot(x="n_distractors", y="rt", hue='condition', data=df_avg)
plt.xlabel('Number of distractors')
plt.ylabel('Search time (ms)')
plt.sho...
Week7_lecture.ipynb
marcus-nystrom/python_course
gpl-3.0
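The groupby-then-average step can be sketched with a few invented rows; `groupby(...).mean()` collapses repeated trials into one value per participant/condition/set-size cell:

```python
import pandas as pd

# Made-up rows mimicking the visual-search data layout
df = pd.DataFrame({
    'p_id':          [1, 1, 1, 1, 2, 2],
    'n_distractors': [5, 5, 10, 10, 5, 10],
    'condition':     ['BB', 'BB', 'RB', 'RB', 'BB', 'RB'],
    'rt':            [400., 420., 600., 640., 380., 590.],
    'trial':         [1, 2, 1, 2, 1, 1],
})

# Average reaction time per participant / set size / condition cell
df_avg = df.groupby(['p_id', 'n_distractors', 'condition'], as_index=False)['rt'].mean()
print(df_avg)
```

Participant 1's two BB trials (400 and 420 ms) collapse into a single 410 ms row.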
Derived Features
Another common feature type is derived features, where some pre-processing step is applied to the data to generate features that are somehow more informative. Derived features may be based on dimensionality reduction (such as PCA or manifold learning), may be linear or nonlinear combinations of featu...
import os

f = open(os.path.join('datasets', 'titanic', 'titanic3.csv'))
print(f.readline())
lines = []
for i in range(3):
    lines.append(f.readline())
print(lines)
notebooks/03.6 Case Study - Titanic Survival.ipynb
Capepy/scipy_2015_sklearn_tutorial
cc0-1.0
The site linked here gives a broad description of the keys and what they mean - we show it here for completeness:
pclass: Passenger Class (1 = 1st; 2 = 2nd; 3 = 3rd)
survival: Survival (0 = No; 1 = Yes)
name: Name
sex: Sex
age: Age
sibsp: ...
from helpers import process_titanic_line
print(process_titanic_line(lines[0]))
notebooks/03.6 Case Study - Titanic Survival.ipynb
Capepy/scipy_2015_sklearn_tutorial
cc0-1.0
Now that we see the expected format from the line, we can call a dataset helper which uses this processing to read in the whole dataset. See helpers.py for more details.
from helpers import load_titanic
keys, train_data, test_data, train_labels, test_labels = load_titanic(
    test_size=0.2, feature_skip_tuple=(), random_state=1999)
print("Key list: %s" % keys)
notebooks/03.6 Case Study - Titanic Survival.ipynb
Capepy/scipy_2015_sklearn_tutorial
cc0-1.0
With all of the hard data loading work out of the way, evaluating a classifier on this data becomes straightforward. Setting up the simplest possible model, we want to see what the simplest score can be with DummyClassifier.
from sklearn.metrics import accuracy_score
from sklearn.dummy import DummyClassifier

clf = DummyClassifier(strategy='most_frequent')
clf.fit(train_data, train_labels)
pred_labels = clf.predict(test_data)
print("Prediction accuracy: %f" % accuracy_score(pred_labels, test_labels))
notebooks/03.6 Case Study - Titanic Survival.ipynb
Capepy/scipy_2015_sklearn_tutorial
cc0-1.0
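The `DummyClassifier` baseline simply predicts the majority class, so its accuracy equals the majority-class frequency. A tiny self-contained sketch with invented labels:

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score

# Toy labels: 70% zeros, so the majority-class baseline should score 0.7
X = np.zeros((10, 1))
y = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 1])

clf = DummyClassifier(strategy='most_frequent')  # always predicts the most common label
clf.fit(X, y)
acc = accuracy_score(y, clf.predict(X))
print(acc)  # 0.7
```

Any real model should beat this number; if it doesn't, the features carry no usable signal.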
Exercise: Clean the countries data and save it as a valid CSV without a header.
import pandas as pd
import numpy as np

df_country_raw = pd.read_csv("../data/countries_data.csv", sep=";")
df_country_raw.head(15)
df_country_raw.to_csv("../data/countries_data_clean.csv", header=False)
notes/99 - Exercices.ipynb
f-guitart/data_mining
gpl-3.0
Exercise: Build a function that generates a dataframe with N user ids plus a list of a random number of random news topics from news_topics.csv.
import pandas as pd
import numpy as np

def generate_users_df(num_users, num_topics):
    # generate num_users usernames
    usernames_df = pd.Series(["user"]*num_users).str.cat(pd.Series(np.arange(num_users)).map(str))
    # read topics csv
    news_topics = pd.read_csv("../data/news_topics.csv", header=None)
    # gene...
notes/99 - Exercices.ipynb
f-guitart/data_mining
gpl-3.0
Exercise: Save the info generated with the previous function as CSV so that it can be easily loaded as a pair RDD in pyspark.
import csv

M = 20
N = 1000
users_df = generate_users_df(N, M)
users_df.to_csv("../data/users_events_example/user_info_%susers_%stopics.csv" % (N, M),
                columns=["username", "topics"], header=None, index=None)
# quoting=csv.QUOTE_MINIMAL)
notes/99 - Exercices.ipynb
f-guitart/data_mining
gpl-3.0
Exercise: Build a function that generates N csv files containing users' web-browsing information. This function takes a max number of users M (from user0 to userM) and generates K user information logs for a randomly picked user (with repetition). The function will return this information with a timestamp. Each file r...
import datetime

def generate_user_events(date_start, num_files, num_users, num_events):
    # generate usernames
    usernames_df = pd.Series(["user"]*num_users).str.cat(pd.Series(np.arange(num_users)).map(str))
    # read topics
    news_topics = pd.read_csv("../data/news_topics.csv", header=None, lineterminator="\n").T
    ...
notes/99 - Exercices.ipynb
f-guitart/data_mining
gpl-3.0
Exercise: Generate a unique id for the papers.lst file and save it as a csv file. Then generate a csv file with random references among papers from papers.lst.
import csv, re
import pandas as pd
import numpy as np

f = open("../data/papers.lst", "rb")
papers = []
for idx, l in enumerate(f.readlines()):
    t = re.match("(\d+)(\s*)(.\d*)(\s*)(\w+)(\s*)(.*)", l)
    if t:
        # print "|", t.group(1), "|", t.group(3), "|", t.group(5), "|", t.group(7), "|"
        papers.append([t.group(...
notes/99 - Exercices.ipynb
f-guitart/data_mining
gpl-3.0
Exercise: Read "../data/country_info_worldbank.xls", delete wrong rows, and set proper column names.
import pandas as pd

cc_df0 = pd.read_excel("../data/country_info_worldbank.xls")
# delete unnecessary rows
cc_df1 = cc_df0[cc_df0["Unnamed: 2"].notnull()]
# get column names and set to dataframe
colnames = cc_df1.iloc[0].tolist()
colnames[0] = "Order"
cc_df1.columns = colnames
# delete void columns
cc_df2 = cc_df1.loc[:, cc_...
notes/99 - Exercices.ipynb
f-guitart/data_mining
gpl-3.0
Exercise: Convert lat/lon to UTM.
import pandas as pd

est_df = pd.read_csv("../data/estacions_meteo.tsv", sep="\t")
est_df.head()
est_df.columns = est_df.columns.str.lower().\
    str.replace("\[codi\]", "").\
    str.replace("\(m\)", "").str.strip()
est_df.longitud = est_df.longitud.str.replace(",", ".")
est_df.latitud = e...
notes/99 - Exercices.ipynb
f-guitart/data_mining
gpl-3.0
Exercise: Convert to Technically Correct Data: iqsize.csv.
import pandas as pd

df = pd.read_csv("../data/iqsize.csv", na_values="n/a")
df.dtypes
# clean piq
errors = pd.to_numeric(df.piq, errors="coerce")
print(df["piq"][errors.isnull()])
df["piq"] = pd.to_numeric(df["piq"].str.replace("'", "."))
df.dtypes
errors = pd.to_numeric(df.height, errors="coerce")
print(df["height"][...
notes/99 - Exercices.ipynb
f-guitart/data_mining
gpl-3.0
Create a feature reader
We create a feature reader to obtain minimal distances between all residues which are not close neighbours. Feel free to map these distances to binary contacts or to use inverse minimal residue distances instead. These choices usually work quite well.
traj_files = [f for f in sorted(glob('../../../DESHAWTRAJS/CLN025-0-protein/CLN025-0-protein-*.dcd'))]
pdb_file = '../../../DESHAWTRAJS/CLN025-0-protein/chig_pdb_166.pdb'
features = pyemma.coordinates.featurizer(pdb_file)
features.add_residue_mindist()
source = pyemma.coordinates.source([traj_files], features=features...
test/Chignoling_FS_after_clustering.ipynb
ZuckermanLab/NMpathAnalysis
gpl-3.0
Discretization and MSM estimation We start the actual analysis with a TICA projection onto two components on which we perform a k-means clustering. Then, we take a quick view on the implied timescale convergence, the 2D representation, and the clustering:
tica = pyemma.coordinates.tica(data=source, lag=5, dim=2).get_output()[0]
cluster = pyemma.coordinates.cluster_kmeans(tica, k=45, max_iter=100)
lags = np.asarray([1, 5, 10, 20, 50] + [i * 100 for i in range(1, 21)])
fig, axes = plt.subplots(1, 2, figsize=(8, 4))
pyemma.plots.plot_implied_timescales(
    pyemma.msm.its...
test/Chignoling_FS_after_clustering.ipynb
ZuckermanLab/NMpathAnalysis
gpl-3.0
Agglomerative Clustering from the transition matrix
Hierarchical agglomerative clustering using the Markovian commute time: $t_{ij} = \mathrm{MFPT}(i \rightarrow j) + \mathrm{MFPT}(j \rightarrow i)$. IMPORTANT: The goal of this clustering is to identify macrostates, not to use the best lag time; the lag time used for ...
# lag_to_use = [1, 10, 100, 1000]
lag_to_use = [1]
lag_index = [get_lagtime_from_array(lags, element*0.2)[0] for element in lag_to_use]
# These are the t_cut intervals to explore (in lag-time units) with the lag times in "lag_to_use"
# range_per_lag = [[200,600], [200,350], [100,250], [30,200]]
range_per_lag = [[400,9...
test/Chignoling_FS_after_clustering.ipynb
ZuckermanLab/NMpathAnalysis
gpl-3.0
Clustering
for k, index in enumerate(lag_index):
    K = msm[index].P
    dt = 0.2
    # ---------------------
    printmd("### Lag-time: " + str(dt) + "ns")
    t_min_list = []
    t_max_list = []
    t_AB_list = []
    big_clusters_list = []
    # t_cut range
    min_ = range_per_lag[k][0]
    max_ = range_per_lag[k][1]
    interval = ...
test/Chignoling_FS_after_clustering.ipynb
ZuckermanLab/NMpathAnalysis
gpl-3.0
Selecting t_cut = 119 ns for MFPT calculations
dt = 0.0002  # in micro-sec
if 0 in big_clusters_list[12][1]:
    stateA = big_clusters_list[12][0]  # Unfolded
    stateB = big_clusters_list[12][1]  # Folded
else:
    stateA = big_clusters_list[12][1]  # Unfolded
    stateB = big_clusters_list[12][0]  # Folded
lag_to_use = lags[0:16:2]
lag_index = [ get_lagtime_from_a...
test/Chignoling_FS_after_clustering.ipynb
ZuckermanLab/NMpathAnalysis
gpl-3.0
Question 1 - Feature Observation As a reminder, we are using three features from the Boston housing dataset: 'RM', 'LSTAT', and 'PTRATIO'. For each data point (neighborhood): - 'RM' is the average number of rooms among homes in the neighborhood. - 'LSTAT' is the percentage of homeowners in the neighborhood considered "...
from sklearn.metrics import r2_score

def performance_metric(y_true, y_predict):
    """ Calculates and returns the performance score between
        true and predicted values based on the metric chosen. """
    # TODO: Calculate the performance score between 'y_true' and 'y_predict'
    score = r2_score(y_true, ...
boston_housing/boston_housing.ipynb
alirsamar/MLND
mit
Answer: The model seems to work well in making predictions: it captures the variation of the target variable, and predictions are close to the true values, which is confirmed by an R^2 score of 0.923, close to 1.
Implementation: Shuffle and Split Data
Your next implementation requires t...
# TODO: Import 'train_test_split'
from sklearn.cross_validation import train_test_split

# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.2, random_state=0)

# Success
print("Training and testing split was successful.")
boston_housing/boston_housing.ipynb
alirsamar/MLND
mit
Question 3 - Training and Testing
What is the benefit of splitting a dataset into some ratio of training and testing subsets for a learning algorithm? Hint: What could go wrong with not having a way to test your model? Answer: One of the reasons we need a testing subset is to check if the model is overfitting the traini...
# Produce learning curves for varying training set sizes and maximum depths
vs.ModelLearning(features, prices)
boston_housing/boston_housing.ipynb
alirsamar/MLND
mit
Question 4 - Learning the Data Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model? Hint: Are the learning curves converging to parti...
vs.ModelComplexity(X_train, y_train)
boston_housing/boston_housing.ipynb
alirsamar/MLND
mit
Question 5 - Bias-Variance Tradeoff When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions? Hint: How do you know when a model is suffering fro...
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.cross_validation import ShuffleSplit
from sklearn.metrics import make_scorer
from sklearn.tree import DecisionTreeRegressor
from sklearn.grid_search import GridSearchCV

def fit_model(X, y):
    """ Performs grid search over the 'max...
boston_housing/boston_housing.ipynb
alirsamar/MLND
mit
Answer: The optimal model has a maximum depth of 4. My initial guess was right that a maximum depth of 4 is the optimal parameter for the model.
Question 10 - Predicting Selling Prices
Imagine that you were a real estate agent in the Boston area looking to use this model to help price homes owned by your clients that the...
# Produce a matrix for client data
client_data = [[5, 17, 15],  # Client 1
               [4, 32, 22],  # Client 2
               [8, 3, 12]]   # Client 3

# Show predictions
for i, price in enumerate(reg.predict(client_data)):
    print("Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price))
boston_housing/boston_housing.ipynb
alirsamar/MLND
mit
Answer: The predicted selling prices are: Client 1's home at USD 391,183.33, Client 2's at USD 189,123.53, and Client 3's at USD 942,666.67. These prices look reasonable if we consider the number of rooms each house has and the other parameters discussed earlier. The student-to-teacher ratio is lower as...
vs.PredictTrials(features, prices, fit_model, client_data)
boston_housing/boston_housing.ipynb
alirsamar/MLND
mit
First, we must connect to our data cube. We can then query the contents of the data cube we have connected to, including both the metadata and the actual data.
dc = datacube.Datacube(app='dc-water-analysis')
api = datacube.api.API(datacube=dc)
dc_notebooks/water_detection_3.ipynb
ceos-seo/Data_Cube_v2
apache-2.0
Obtain the metadata of our cube... Initially, we need to get the platforms and products in the cube. The rest of the metadata will be dependent on these two options.
# Get available products
products = dc.list_products()
platform_names = list(set(products.platform))
product_names = list(products.name)
dc_notebooks/water_detection_3.ipynb
ceos-seo/Data_Cube_v2
apache-2.0
Execute the following code and then use the generated form to choose your desired platform and product.
product_values = create_platform_product_gui(platform_names, product_names)
dc_notebooks/water_detection_3.ipynb
ceos-seo/Data_Cube_v2
apache-2.0
With the platform and product, we can get the rest of the metadata. This includes the resolution of a pixel, the latitude/longitude extents, and the minimum and maximum dates available of the chosen platform/product combination.
# Save the form values
platform = product_values[0].value
product = product_values[1].value

# Get the pixel resolution of the selected product
resolution = products.resolution[products.name == product]
lat_dist = resolution.values[0][0]
lon_dist = resolution.values[0][1]

# Get the extents of the cube
descriptor = api...
dc_notebooks/water_detection_3.ipynb
ceos-seo/Data_Cube_v2
apache-2.0
Execute the following code and then use the generated form to choose the extents of your desired data.
extent_values = create_extents_gui(min_date_str, max_date_str, min_lon_rounded, max_lon_rounded, min_lat_rounded, max_lat_rounded)
dc_notebooks/water_detection_3.ipynb
ceos-seo/Data_Cube_v2
apache-2.0
Now that we have filled out the above two forms, we have enough information to query our data cube. The following code snippet ends with the actual Data Cube query, which will return the dataset with all the data matching our query.
# Save form values
start_date = datetime.strptime(extent_values[0].value, '%Y-%m-%d')
end_date = datetime.strptime(extent_values[1].value, '%Y-%m-%d')
min_lon = extent_values[2].value
max_lon = extent_values[3].value
min_lat = extent_values[4].value
max_lat = extent_values[5].value

# Query the Data Cube
dataset_in = d...
dc_notebooks/water_detection_3.ipynb
ceos-seo/Data_Cube_v2
apache-2.0
At this point, we have finished accessing our data cube and we can turn to analyzing our data. In this example, we will run the WOfS algorithm. The wofs_classify function, seen below, will return a modified dataset, where a value of 1 indicates the pixel has been classified as water by the WOfS algorithm and 0 represen...
water_class = wofs_classify(dataset_in)
dc_notebooks/water_detection_3.ipynb
ceos-seo/Data_Cube_v2
apache-2.0
Execute the following code and then use the generated form to choose your desired acquisition date. The following two code blocks are only necessary if you would like to see the water mask of a single acquisition date.
acq_dates = list(water_class.time.values.astype(str))
acq_date_input = create_acq_date_gui(acq_dates)

# Save form value
acq_date = acq_date_input.value
acq_date_index = acq_dates.index(acq_date)

# Get water class for selected acquisition date and mask no data values
water_class_for_acq_date = water_class.wofs[acq_dat...
dc_notebooks/water_detection_3.ipynb
ceos-seo/Data_Cube_v2
apache-2.0
With all of the pixels classified as water or non-water, let's perform a time-series analysis over our derived water class. The function perform_timeseries_analysis takes in a dataset of 3 dimensions (time, latitude, and longitude), then sums the values of each pixel over time. It also keeps track of the number o...
time_series = perform_timeseries_analysis(water_class)
dc_notebooks/water_detection_3.ipynb
ceos-seo/Data_Cube_v2
apache-2.0
The following plots visualize the results of our timeseries analysis. You may change the color scales with the cmap option. For color scales available for use by cmap, see http://matplotlib.org/examples/color/colormaps_reference.html. You can also define discrete color scales by using the levels and colors. For example...
normalized_water_observations_plot = time_series.normalized_data.plot(cmap='dc_au_WaterSummary')
total_water_observations_plot = time_series.total_data.plot(cmap='dc_au_WaterObservations')
total_clear_observations_plot = time_series.total_clean.plot(cmap='dc_au_ClearObservations')
dc_notebooks/water_detection_3.ipynb
ceos-seo/Data_Cube_v2
apache-2.0
Cyclic Rotation
def solution(A, K):
    if len(A) == 0:
        return A
    M = K % len(A)   # K - (K/len(A))*len(A) is just K modulo len(A)
    if M == 0:
        return A
    return A[-M:] + A[:-M]

print(solution(A, K))
2. Arrays/Cyclic Rotation.ipynb
SimplifyData/Codality
mit
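A compact Python 3 sketch of the same rotation, using the modulo identity that `K - (K/len(A))*len(A)` computes; the sample input is the usual Codility-style example, invented here for demonstration:

```python
def rotate(A, K):
    # Right-rotate list A by K places; K may exceed len(A) or be a multiple of it
    if not A:
        return A
    k = K % len(A)
    return A[-k:] + A[:-k] if k else A[:]

print(rotate([3, 8, 9, 7, 6], 3))  # [9, 7, 6, 3, 8]
```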
File I/O
Fixing encoding in a CSV file
The file nobel-prize-winners.csv contains some odd-looking characters in the name column, such as '&egrave;'. These are HTML codes for characters outside the limited ASCII set. Python is very capable at Unicode/UTF-8, so let's convert the characters to something more pleasant to...
import html  # part of the Python 3 standard library

with open('nobel-prize-winners.csv', 'rt') as fp:
    orig = fp.read()  # read the entire file as a single hunk of text

orig[727:780]  # show some characters, note the '\n'
print(orig[727:780])  # see how the '\n' gets converted to a newline
src/00-Solutions-to-exercises.ipynb
MadsJensen/intro_to_scientific_computing
bsd-3-clause
With some Googling, we find this candidate function to fix the characters:
html.unescape?

fixed = html.unescape(orig)  # one line, less than a second...
print(fixed[727:780])  # much better

with open('nobel-prize-winners-fixed.csv', 'wt') as fp:
    fp.write(fixed)  # write back to disk, and we're done!
src/00-Solutions-to-exercises.ipynb
MadsJensen/intro_to_scientific_computing
bsd-3-clause
Part B Write a function which takes a NumPy array and returns another NumPy array with all its elements squared. Your function should: be named squares take 1 argument: a NumPy array return 1 value: a NumPy array where each element is the squared version of the input array You cannot use any loops, built-in functions...
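One possible solution sketch: NumPy's arithmetic operators broadcast element-wise, so no loop is needed.

```python
import numpy as np

def squares(arr):
    """Return a new array with each element of `arr` squared."""
    return arr ** 2
```

For example, `squares(np.array([1.0, 2.0, 3.0]))` gives `array([1., 4., 9.])`.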
import numpy as np np.random.seed(13735) x1 = np.random.random(10) y1 = np.array([ 0.10729775, 0.01234453, 0.37878359, 0.12131263, 0.89916465, 0.50676134, 0.9927178 , 0.20673811, 0.88873398, 0.09033156]) np.testing.assert_allclose(y1, squares(x1), rtol = 1e-06) np.random.seed(7853) x2 = np.random.rand...
assignments/A4/A4_Q2.ipynb
eds-uga/csci1360e-su17
mit
Part C Write a function which computes the sum of the elements of a NumPy array. Your function should: be named sum_of_elements take 1 argument: a NumPy array return 1 floating-point value: the sum of the elements in the NumPy array You cannot use any loops, but you can use the numpy.sum function.
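A one-line sketch using `numpy.sum`, which the prompt explicitly allows:

```python
import numpy as np

def sum_of_elements(arr):
    """Return the sum of the elements of `arr` as a float."""
    return float(np.sum(arr))
```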
import numpy as np

np.random.seed(7631)
x1 = np.random.random(483)
s1 = 233.48919473752667
np.testing.assert_allclose(s1, sum_of_elements(x1))

np.random.seed(13275)
x2 = np.random.random(23)
s2 = 12.146235770777777
np.testing.assert_allclose(s2, sum_of_elements(x2))
assignments/A4/A4_Q2.ipynb
eds-uga/csci1360e-su17
mit
Part D You may not have realized it yet, but in the previous three parts, you've implemented almost all of what's needed to compute the Euclidean distance between two vectors, as represented with NumPy arrays. All you have to do now is link the code you wrote in the previous three parts together in the right order. Wri...
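A sketch that chains the previous steps in order, subtract, square, sum, square root:

```python
import numpy as np

def distance(x, y):
    """Euclidean distance between two equal-length NumPy vectors."""
    diff = x - y         # Part A: element-wise subtraction
    sq = diff ** 2       # Part B: squares
    total = np.sum(sq)   # Part C: sum of elements
    return np.sqrt(total)
```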
import numpy as np import numpy.linalg as nla np.random.seed(477582) x11 = np.random.random(10) x12 = np.random.random(10) np.testing.assert_allclose(nla.norm(x11 - x12), distance(x11, x12)) np.random.seed(54782) x21 = np.random.random(584) x22 = np.random.random(584) np.testing.assert_allclose(nla.norm(x21 - x22), d...
assignments/A4/A4_Q2.ipynb
eds-uga/csci1360e-su17
mit
Part E Now, you'll use your distance function to find the pair of vectors that are closest to each other. This is a very, very common problem in data science: finding a data point that is most similar to another data point. In this problem, you'll write a function that takes two arguments: the data point you have (we'l...
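Judging from the test values below (0.0 when the reference point itself is in the list), the function appears to return the smallest distance to any candidate; a sketch under that assumption:

```python
import numpy as np

def similarity_search(ref, candidates):
    """Return the Euclidean distance from `ref` to its nearest neighbor
    among the arrays in `candidates`."""
    return min(np.sqrt(np.sum((ref - c) ** 2)) for c in candidates)
```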
import numpy as np r1 = np.array([1, 1]) l1 = [np.array([1, 1]), np.array([2, 2]), np.array([3, 3])] a1 = 0.0 np.testing.assert_allclose(a1, similarity_search(r1, l1)) np.random.seed(7643) r2 = np.random.random(2) * 100 l2 = [np.random.random(2) * 100 for i in range(100)] a2 = 1.6077074397123927 np.testing.assert_al...
assignments/A4/A4_Q2.ipynb
eds-uga/csci1360e-su17
mit
Input
from sklearn.datasets import load_files

corpus = load_files("../data/")
doc_count = len(corpus.data)
print("Doc count:", doc_count)
# Use == for value comparison; `is` checks object identity and is unreliable for ints
assert doc_count == 56, "Wrong number of documents loaded, should be 56 (56 stories)"
HolmesClustering/holmes_clustering/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
Vectorizer
from helpers.tokenizer import TextWrangler
from sklearn.feature_extraction.text import CountVectorizer

bow = CountVectorizer(strip_accents="ascii", tokenizer=TextWrangler(kind="lemma"))
X_bow = bow.fit_transform(corpus.data)
HolmesClustering/holmes_clustering/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
We decided on BOW vectors containing lemmatized words. In this case, BOW results in better cluster performance than tf-idf vectors, and lemmatization worked slightly better than stemming (see the KElbow plots in the plots/ dir). Models
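To illustrate the difference between the two representations on a toy corpus (a generic sketch, unrelated to the Holmes data): BOW keeps raw term counts, while tf-idf reweights them by how rare each term is across documents.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = ["holmes and watson", "holmes and the hound"]
bow_demo = CountVectorizer().fit_transform(docs)      # raw term counts
tfidf_demo = TfidfVectorizer().fit_transform(docs)    # counts reweighted by rarity
print(bow_demo.toarray())
print(tfidf_demo.toarray().round(2))
```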
from sklearn.cluster import KMeans kmeans = KMeans(n_jobs=-1, random_state=23) from yellowbrick.cluster import KElbowVisualizer viz = KElbowVisualizer(kmeans, k=(2, 28), metric="silhouette") viz.fit(X_bow) #viz.poof(outpath="plots/KElbow_bow_lemma_silhoutte.png") viz.poof() from yellowbrick.cluster import Silhouett...
HolmesClustering/holmes_clustering/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
We decided on 3 clusters because it has the highest average Silhouette score compared to other cluster sizes.
from yellowbrick.cluster import SilhouetteVisualizer

n_clusters = 3
model = KMeans(n_clusters=n_clusters, n_jobs=-1, random_state=23)

viz = SilhouetteVisualizer(model)
viz.fit(X_bow)
viz.poof()
HolmesClustering/holmes_clustering/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
Nonetheless, the assignment isn't perfect. Cluster #1 looks good, but the many negative values in clusters #0 and #2 suggest that for some documents there exists a cluster with more similar docs than the one they were actually assigned to. As a cluster size of 2 also leads to an inhomogeneous cluster and has a lower average Silhouette score, we go with the size of 3.
from sklearn.pipeline import Pipeline

pipe = Pipeline([("bow", bow), ("kmeans", model)])
pipe.fit(corpus.data)
pred = pipe.predict(corpus.data)
HolmesClustering/holmes_clustering/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
Evaluation Cluster density Silhouette coefficient: range [-1, 1], where 1 means a very dense cluster and negative values indicate ill-separated clusters.
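To illustrate the range on toy data (a generic sketch, not part of the Holmes analysis): well-matched labels on two separated blobs score close to 1, while labels that cut across the blobs score negative.

```python
import numpy as np
from sklearn.metrics import silhouette_score

# Two well-separated blobs of two points each
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels_good = [0, 0, 1, 1]   # labels match the blobs -> score near 1
labels_bad = [0, 1, 0, 1]    # labels cut across the blobs -> negative score
print(silhouette_score(X, labels_good))
print(silhouette_score(X, labels_bad))
```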
from sklearn.metrics import silhouette_score

print("Avg Silhouette score:", silhouette_score(X_bow, pred), "(novel collections)")
HolmesClustering/holmes_clustering/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
Compared to original collections by Sir Arthur Conan Doyle:
print("Avg Silhouette score:", silhouette_score(X_bow, corpus.target), "(original collections)")
HolmesClustering/holmes_clustering/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
The average Silhouette coefficient is at least slightly positive and much better than the score of the original assignment (which is even negative). Success. Visual Inspection We come from the original assignment by Sir Arthur Conan Doyle...
from yellowbrick.text import TSNEVisualizer

# Map target names of original collections to target vals
collections_map = {}
for i, collection_name in enumerate(corpus.target_names):
    collections_map[i] = collection_name

# Plot
tsne_original = TSNEVisualizer()
labels = [collections_map[c] for c in corpus.target]
tsn...
HolmesClustering/holmes_clustering/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
... to the novel collection assignment:
# Plot
tsne_novel = TSNEVisualizer()
labels = ["c{}".format(c) for c in pipe.named_steps.kmeans.labels_]
tsne_novel.fit(X_bow, labels)
tsne_novel.poof()
HolmesClustering/holmes_clustering/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
This confirms the findings from the Silhouette plot above (in the Models section): cluster #1 looks very coherent, cluster #2 is separated, and the two documents of cluster #0 fly around somewhere in between. Nonetheless, compared to the original collection, this looks far better. Success. Document-Cluster Assignment Finally, we want to...
# Novel titles, can be more creative ;>
novel_collections_map = {0: "The Unassignable Adventures of Cluster 0",
                         1: "The Adventures of Sherlock Holmes in Cluster 1",
                         2: "The Case-Book of Cluster 2"}
HolmesClustering/holmes_clustering/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
Let's see how the books are assigned differently to collections by Sir Arthur Conan Doyle (Original Collection) and by the clustering algo (Novel Collection).
orig_assignment = [collections_map[c] for c in corpus.target]
novel_assignment = [novel_collections_map[p] for p in pred]
titles = [" ".join(f_name.split("/")[-1].split(".")[0].split("_")) for f_name in corpus.filenames]

# Final df, compares original with new assignment
df_documents = pd.DataFrame([orig_as...
HolmesClustering/holmes_clustering/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
Collections are unevenly assigned; cluster #1 is the predominant one. It looks like cluster #0 subsumes the (rationally) unassignable stories. The t-SNE plot finally looks like this:
tsne_novel_named = TSNEVisualizer(colormap="Accent")
tsne_novel_named.fit(X_bow, novel_assignment)
tsne_novel_named.poof(outpath="plots/Novel_Sherlock_Holmes_Collections.png")
HolmesClustering/holmes_clustering/notebook/2_Modeling.ipynb
donK23/pyData-Projects
apache-2.0
The following box is useless if you're not using a notebook - it just enables the online notebook drawing features.
# the following are to do with this interactive notebook code
%matplotlib inline
from matplotlib import pyplot as plt  # this lets you draw inline pictures in the notebooks
import pylab  # this allows you to control figure size
pylab.rcParams['figure.figsize'] = (10.0, 8.0)  # this controls figure size in the notebook
0 The I don't have a notebook notebook.ipynb
handee/opencv-gettingstarted
mit
Its three main tables are vis.Session, vis.Condition, and vis.Trial. Furthermore, vis.Condition has many tables below it that specify the parameters specific to each type of stimulus condition.
(dj.ERD(Condition)+1+Session+Trial).draw()
jupyter/tutorial/pipeline_vis.ipynb
fabiansinz/pipeline
lgpl-3.0
Each vis.Session comprises multiple trials that each has only one condition. The trial has timing information and refers to the general stimulus condition. The type of condition is determined by the dependent tables of vis.Condition (e.g. vis.Monet, vis.Trippy, vis.MovieClipCond) that describe details that are specif...
(dj.ERD(MovieClipCond)+Monet-1).draw()
jupyter/tutorial/pipeline_vis.ipynb
fabiansinz/pipeline
lgpl-3.0
Distribution of constructiveness (Check if it's skewed)
df['constructive_nominal'] = df['constructive'].apply(nominalize_constructiveness) cdict = df['constructive_nominal'].value_counts().to_dict() # Plot constructiveness distribution in the data # The slices will be ordered and plotted counter-clockwise. labels = 'Constructive', 'Non constructive', 'Not sure' items =[cd...
constructiveness_toxicity_crowdsource/jupyter-notebooks/sanity_tests/sanity_test_crowd_annotations.ipynb
conversationai/conversationai-crowdsource
apache-2.0
Distribution of toxicity (Check if skewed)
df['crowd_toxicity_level_nominal'] = df['crowd_toxicity_level'].apply(nominalize_toxicity) # Plot toxicity distribution with context (avg score) toxicity_counts_dict = {'Very toxic':0, 'Toxic':0, 'Mildly toxic':0, 'Not toxic':0} toxicity_counts_dict.update(df['crowd_toxicity_level_nominal'].value_counts().to_dict()) p...
constructiveness_toxicity_crowdsource/jupyter-notebooks/sanity_tests/sanity_test_crowd_annotations.ipynb
conversationai/conversationai-crowdsource
apache-2.0
Distribution of toxicity in constructive and non-constructive comments (Check if the dists are very different) Plot toxicity distribution for constructive comments
toxicity_column_name = 'crowd_toxicity_level_nominal' constructive_very_toxic = df[(df['constructive_nominal'] == 'yes') & (df[toxicity_column_name] == 'Very toxic')].shape[0] print('Constructive very toxic: ', constructive_very_toxic) constructive_toxic = df[(df['constructive_nominal'] == 'yes') & (df[toxicity_column...
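The repeated boolean filters above amount to a cross-tabulation of constructiveness against toxicity level; `pandas.crosstab` does the same counting in one call. A sketch with made-up rows (the column names follow the dataframe above, the values are invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    "constructive_nominal": ["yes", "yes", "no", "not_sure", "no", "yes"],
    "crowd_toxicity_level_nominal": ["Toxic", "Not toxic", "Very toxic",
                                     "Not toxic", "Toxic", "Not toxic"],
})
# One row per constructiveness label, one column per toxicity level
print(pd.crosstab(df["constructive_nominal"], df["crowd_toxicity_level_nominal"]))
```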
constructiveness_toxicity_crowdsource/jupyter-notebooks/sanity_tests/sanity_test_crowd_annotations.ipynb
conversationai/conversationai-crowdsource
apache-2.0
Plot toxicity distribution for non-constructive comments
# Plot toxicity (with context) distribution for non constructive comments nconstructive_very_toxic = df[(df['constructive_nominal'] == 'no') & (df[toxicity_column_name] == 'Very toxic')].shape[0] print('Non constructive very toxic: ', nconstructive_very_toxic) nconstructive_toxic = df[(df['constructive_nominal'] == 'no...
constructiveness_toxicity_crowdsource/jupyter-notebooks/sanity_tests/sanity_test_crowd_annotations.ipynb
conversationai/conversationai-crowdsource
apache-2.0
Plot toxicity distribution for ambiguous comments
# Plot toxicity (with context) distribution for ambiguous comments ns_very_toxic = df[(df['constructive_nominal'] == 'not_sure') & (df[toxicity_column_name] == 'Very toxic')].shape[0] print('Ambiguous very toxic: ', ns_very_toxic) ns_toxic = df[(df['constructive_nominal'] == 'not_sure') & (df[toxicity_column_name] == '...
constructiveness_toxicity_crowdsource/jupyter-notebooks/sanity_tests/sanity_test_crowd_annotations.ipynb
conversationai/conversationai-crowdsource
apache-2.0
Check how the annotators did on internal gold questions
# Starting from batch three we have included internal gold questions for toxicity and constructiveness (20 each) with # internal_gold_constructiveness flag True. In the code below, we examine to what extent the annotators agreed with # these internal gold questions. # Get a subset dataframe with internal gold quest...
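The agreement check described above can be sketched as follows. The column names here (`internal_gold_constructiveness`, `constructive_gold`) are hypothetical stand-ins for whatever the actual annotation dataframe uses, and the rows are invented:

```python
import pandas as pd

# Hypothetical annotation data: which rows are gold questions, the gold
# answer, and what the crowd actually answered
df = pd.DataFrame({
    "internal_gold_constructiveness": [True, True, True, False],
    "constructive_gold": ["yes", "no", "yes", None],
    "constructive_nominal": ["yes", "no", "no", "yes"],
})

# Restrict to gold questions and compute the fraction the crowd got right
gold = df[df["internal_gold_constructiveness"]]
agreement = (gold["constructive_nominal"] == gold["constructive_gold"]).mean()
print("Agreement with internal gold questions: {:.0%}".format(agreement))
```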
constructiveness_toxicity_crowdsource/jupyter-notebooks/sanity_tests/sanity_test_crowd_annotations.ipynb
conversationai/conversationai-crowdsource
apache-2.0
2. RL-Algorithms based on Temporal Difference TD(0) 2a. Load the "Temporal Difference" Python class Load the Python class PlotUtils() which provides various plotting utilities and start a new instance.
%run ../PlotUtils.py
plotutls = PlotUtils()
Reinforcement-Learning/TD0-models/02.CliffWalking.ipynb
tgrammat/ML-Data_Challenges
apache-2.0