# A First Application: Classifying Iris Species

## Meet the Data

The data we will use for this example is the Iris dataset, a commonly used dataset in machine learning and statistics tutorials. The Iris dataset is included in scikit-learn in the `datasets` module. We can load it by calling the `load_iris` function.
```python
from sklearn.datasets import load_iris

iris_dataset = load_iris()
iris_dataset
```

*Source: `introduction_to_ml_with_python/1_Introduction.ipynb` in bgroveben/python3_machine_learning_projects (MIT license)*
The iris_dataset object returned by load_iris is a Bunch object, which is very similar to a dictionary: it contains keys and values.
```python
print("Keys of iris_dataset: \n{}".format(iris_dataset.keys()))
```
The value of the key DESCR is a short description of the dataset.
```python
print(iris_dataset['DESCR'][:193] + "\n...")
```
The value of the key target_names is an array of strings, containing the species of flower that we want to predict.
```python
print("Target names: {}".format(iris_dataset['target_names']))
```
The value of feature_names is a list of strings, giving the description of each feature:
```python
print("Feature names: \n{}".format(iris_dataset['feature_names']))
```
The data itself is contained in the target and data fields. data contains the numeric measurements of sepal length, sepal width, petal length, and petal width in a NumPy array:
```python
print("Type of data: {}".format(type(iris_dataset['data'])))
```
The rows in the data array correspond to flowers, while the columns represent the four measurements that were taken for each flower.
```python
print("Shape of data: {}".format(iris_dataset['data'].shape))
```
The shape of the data array is (number of samples, number of features): one row per flower (sample) and one column per property (feature, e.g. sepal width). Here are the feature values for the first five samples:
```python
print("First five rows of data:\n{}".format(iris_dataset['data'][:5]))
```
The data tells us that all of the first five flowers have a petal width of 0.2 cm and that the first flower has the longest sepal (5.1 cm). The target array contains the species of each of the flowers that were measured, also as a NumPy array:
```python
print("Type of target: {}".format(type(iris_dataset['target'])))
```
target is a one-dimensional array, with one entry per flower:
```python
print("Shape of target: {}".format(iris_dataset['target'].shape))
```
The species are encoded as integers from 0 to 2:
```python
print("Target:\n{}".format(iris_dataset['target']))
```
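The integer-to-name mapping can be sketched in isolation. This is a minimal illustration with hand-written labels, not the full Iris arrays: NumPy fancy indexing maps every integer label to its species name.

```python
import numpy as np

# integer class labels, in the style of iris_dataset['target']
target = np.array([0, 0, 1, 2])
# the lookup table, in the style of iris_dataset['target_names']
target_names = np.array(['setosa', 'versicolor', 'virginica'])

# fancy indexing: each integer picks the corresponding name
species = target_names[target]
print(species)  # ['setosa' 'setosa' 'versicolor' 'virginica']
```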
The meanings of the numbers are given by the iris_dataset['target_names'] array: 0 means setosa, 1 means versicolor, and 2 means virginica.

## Measuring Success: Training and Testing Data

We want to build a machine learning model from this data that can predict the species of iris for a new set of measurements. To assess the mode...
```python
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    iris_dataset['data'], iris_dataset['target'], random_state=0)
# The random_state parameter gives the pseudorandom number generator a fixed (set) seed.
# Setting the seed allows us to obtain reproducible result...
```
The output of the train_test_split function is X_train, X_test, y_train, and y_test, which are all NumPy arrays. X_train contains 75% of the rows in the dataset, and X_test contains the remaining 25%.
```python
print("X_train shape: \n{}".format(X_train.shape))
print("y_train shape: \n{}".format(y_train.shape))
print("X_test shape: \n{}".format(X_test.shape))
print("y_test shape: \n{}".format(y_test.shape))
```
## First Things First: Look at Your Data

Before building a machine learning model, it is often a good idea to inspect the data, for several reasons:

- so you can see if the task can be solved without machine learning;
- so you can see if the desired information is contained in the data or not;
- so you can detect abnormali...
```python
import pandas as pd  # added: this cell uses pd, which is not imported elsewhere in the excerpt

# Create dataframe from data in X_train.
# Label the columns using the strings in iris_dataset.feature_names.
iris_dataframe = pd.DataFrame(X_train, columns=iris_dataset.feature_names)
# Create a scatter matrix from the dataframe, color by y_train.
pd.plotting.scatter_matrix(iris_dataframe, c=y_train, figsize=(15, 15),...
```
From the plots, we can see that the three classes seem to be relatively well separated by the sepal and petal measurements. This means that a machine learning model will likely be able to learn to separate them.

## Building Your First Model: k-Nearest Neighbors

There are many classification algorithms in scikit-learn t...
```python
from sklearn.neighbors import KNeighborsClassifier

knn = KNeighborsClassifier(n_neighbors=1)
```
The knn object encapsulates the algorithm that will be used to build the model from the training data, as well as the algorithm to make predictions on new data points. It will also hold the information that the algorithm has extracted from the training data. In the case of KNeighborsClassifier, it will just store the t...
```python
knn.fit(X_train, y_train)
```
The fit method returns the knn object itself (and modifies it in place), so we get a string representation of our classifier. The representation shows us which parameters were used in creating the model. Nearly all of them are the default values, but you can also find n_neighbors=1, which is the parameter that we pass...
```python
import numpy as np  # added: this cell uses np, which is not imported elsewhere in the excerpt

X_new = np.array([[5, 2.9, 1, 0.2]])
print("X_new.shape: \n{}".format(X_new.shape))
```
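As an aside, the difference between a 1-D vector and a single-row 2-D array can be sketched with plain NumPy. This is an illustrative snippet, not part of the original notebook:

```python
import numpy as np

x = np.array([5.0, 2.9, 1.0, 0.2])   # 1-D vector: shape (4,)
X_new = x.reshape(1, -1)             # 2-D array with one row: shape (1, 4)

print(x.shape, X_new.shape)  # (4,) (1, 4)
```

The `-1` tells reshape to infer the second dimension, so a single sample becomes one row of a two-dimensional array, which is the layout scikit-learn expects.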
Note that we made the measurements of this single flower into a row in a two-dimensional NumPy array. scikit-learn always expects two-dimensional arrays for the data. Now, to make a prediction, we call the predict method of the knn object:
```python
prediction = knn.predict(X_new)
print("Prediction: \n{}".format(prediction))
print("Predicted target name: \n{}".format(
    iris_dataset['target_names'][prediction]))
```
Our model predicts that this new iris belongs to class 0, meaning its species is setosa. How do we know whether we can trust our model? We don't know the correct species of this sample, which is the whole point of building the model.

## Evaluating the Model

This is where the test set that we created earlier comes into...
```python
y_pred = knn.predict(X_test)
print("Test set predictions: \n{}".format(y_pred))
print("Test set score: \n{:.2f}".format(np.mean(y_pred == y_test)))
```
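The accuracy used here is simply the fraction of test labels predicted correctly; a minimal standalone sketch with made-up labels:

```python
import numpy as np

y_pred = np.array([0, 1, 2, 1])  # hypothetical predictions
y_test = np.array([0, 1, 1, 1])  # hypothetical true labels

# elementwise comparison gives a boolean array; its mean is the accuracy
accuracy = np.mean(y_pred == y_test)
print(accuracy)  # 0.75 (3 of 4 correct)
```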
We can also use the score method of the knn object, which will compute the test set accuracy for us:
```python
print("Test set score: \n{:.2f}".format(knn.score(X_test, y_test)))
```
For this model, the test set accuracy is about 0.97, which means that we made the correct prediction for 97% of the irises in the test set. In later chapters we will discuss how we can improve performance, and what caveats there are in tuning a model.

## Summary and Outlook

Here is a summary of the code needed for the who...
```python
X_train, X_test, y_train, y_test = train_test_split(
    iris_dataset['data'], iris_dataset['target'], random_state=0)
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
print("Test set score: \n{:.2f}".format(knn.score(X_test, y_test)))
```
We also define a function to read the HDF5 reflectance files and their associated metadata:
```python
import h5py  # added: this function uses h5py, which is not imported in the excerpt

def h5refl2array(h5_filename):
    hdf5_file = h5py.File(h5_filename, 'r')
    # Get the site name
    file_attrs_string = str(list(hdf5_file.items()))
    file_attrs_string_split = file_attrs_string.split("'")
    sitename = file_attrs_string_split[1]
    refl = hdf5_file[sitename]['Reflectance']
    reflArray = refl['...
```

*Source: `code/Python/uncertainty/hyperspectral-validation.ipynb` in NEONInc/NEON-Data-Skills (GPL-2.0 license)*
Define the location where you are holding the data for the Data Institute. The h5_filename will be the flightline that contains the tarps; tarp_48_filename and tarp_03_filename contain the field-validated spectra for the white and black tarps respectively, organized by wavelength and reflectance.
```python
print('Start CHEQ tarp uncertainty script')
h5_filename = 'C:/RSDI_2017/data/CHEQ/H5/NEON_D05_CHEQ_DP1_20160912_160540_reflectance.h5'
tarp_48_filename = 'C:/RSDI_2017/data/CHEQ/H5/CHEQ_Tarp_48_01_refl_bavg.txt'
tarp_03_filename = 'C:/RSDI_2017/data/CHEQ/H5/CHEQ_Tarp_03_02_refl_bavg.txt'
```
We want to pull the spectra from the airborne data at the center of each tarp, to minimize any errors introduced by light infiltrating from adjacent pixels, or by errors in ortho-rectification (source 2). We have pre-determined the coordinates for the center of each tarp, which are as follows: 48% reflectance tarp UT...
```python
tarp_48_center = np.array([727487, 5078970])
tarp_03_center = np.array([727497, 5078970])
```
Now we'll use our function designed for NEON AOP's HDF5 files to access the hyperspectral data
```python
[reflArray, metadata, wavelengths] = h5refl2array(h5_filename)
```
Within the reflectance curves there are areas with noisy data due to atmospheric windows in the water absorption bands. For this exercise we do not want to plot these areas, as their anomalous values obscure details in the plots. The metadata associated with these band locations is contained in the metada...
```python
bad_band_window1 = metadata['bad_band_window1']
bad_band_window2 = metadata['bad_band_window2']
index_bad_window1 = [i for i, x in enumerate(wavelengths)
                    if x > bad_band_window1[0] and x < bad_band_window1[1]]
index_bad_window2 = [i for i, x in enumerate(wavelengths)
                    if x > bad_band_window2[0] and x < bad_band_win...
```
Now join the two lists of indices together into a single variable:
```python
index_bad_windows = index_bad_window1 + index_bad_window2
```
The reflectance data is saved in tab-delimited files. We will use NumPy's genfromtxt function to quickly import the tarp reflectance curves observed with the ASD, using the '\t' delimiter to indicate that tabs separate the columns.
```python
tarp_48_data = np.genfromtxt(tarp_48_filename, delimiter='\t')
tarp_03_data = np.genfromtxt(tarp_03_filename, delimiter='\t')
```
Now we'll set all the data inside of those windows to NaNs (not a number) so they will not be included in the plots
```python
tarp_48_data[index_bad_windows] = np.nan
tarp_03_data[index_bad_windows] = np.nan
```
The next step is to determine which pixel in the reflectance data belongs to the center of each tarp. To do this, we take the difference between each tarp center coordinate and the upper-left corner coordinates specified in the map info of the H5 file, and divide by the pixel size. This information is saved in the metadata dictionary output from our function that...
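The coordinate-to-pixel arithmetic can be illustrated with made-up numbers (the corner, pixel size, and tarp coordinate below are hypothetical, not the real flightline values):

```python
# hypothetical map info: upper-left corner and pixel size
x_min, y_max = 727000.0, 5079500.0
pixel_width, pixel_height = 1.0, 1.0

# a made-up tarp-center coordinate in the same UTM zone
x_tarp, y_tarp = 727487.0, 5078970.0

# column index: distance east of the left edge, in pixels
col = int((x_tarp - x_min) / pixel_width)
# row index: distance south of the top edge, in pixels
row = int((y_max - y_tarp) / pixel_height)

print(col, row)  # 487 530
```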
```python
x_tarp_48_index = int((tarp_48_center[0] - metadata['ext_dict']['xMin']) / float(metadata['res']['pixelWidth']))
y_tarp_48_index = int((metadata['ext_dict']['yMax'] - tarp_48_center[1]) / float(metadata['res']['pixelHeight']))
x_tarp_03_index = int((tarp_03_center[0] - metadata['ext_dict']['xMin']) / float(metadata['res']['...
```
Next, we will plot the curve from the airborne data taken at the center of each tarp together with the curves obtained from the ASD data, to provide a visualization of their consistency for both tarps. Once generated, we will also save the figure to a pre-determined location.
```python
plt.figure(1)
tarp_48_reflectance = np.asarray(reflArray[y_tarp_48_index, x_tarp_48_index, :], dtype=np.float32) / metadata['scaleFactor']
tarp_48_reflectance[index_bad_windows] = np.nan
plt.plot(wavelengths, tarp_48_reflectance, label='Airborne Reflectance')
plt.plot(wavelengths, tarp_48_data[:, 1], label='ASD Reflectance...
```
This produces plots showing the results of the ASD and airborne measurements over the 48% tarp. Visually, the comparison between the two appears to be fairly good. However, over the 3% tarp we appear to be over-estimating the reflectance. Large absolute differences could be associated with ATCOR input parameters (sourc...
```python
plt.figure(3)
plt.plot(wavelengths, tarp_48_reflectance - tarp_48_data[:, 1])
plt.title('CHEQ 20160912 48% tarp absolute difference')
plt.xlabel('Wavelength (nm)')
plt.ylabel('Absolute Reflectance Difference (%)')
plt.savefig('CHEQ_20160912_48_tarp_absolute_diff.png', dpi=300, orientation='landscape', bbox_inches='tight', pad...
```
From this we are able to see that the 48% tarp actually has larger absolute differences than the 3% tarp. The 48% tarp performs poorly at the shortest and longest wavelengths, as well as near the edges of the 'bad band windows.' This is related to difficulty in calibrating the sensor in these sensitive areas (source 1). ...
```python
plt.figure(5)
plt.plot(wavelengths, 100 * np.divide(tarp_48_reflectance - tarp_48_data[:, 1], tarp_48_data[:, 1]))
plt.title('CHEQ 20160912 48% tarp percent difference')
plt.xlabel('Wavelength (nm)')
plt.ylabel('Percent Reflectance Difference')
plt.ylim((-100, 100))
plt.savefig('CHEQ_20160912_48_tarp_relative_diff.png', dpi=30...
```
Nomenclature:

1. Donation: a charitable contribution.
2. Contribution: not a charitable contribution.
```python
def get_data(rows):
    '''
    input: rows from dataframe for a specific donor
    output: money donated and contributed over the years
    '''
    return rows\
        .groupby(['activity_year', 'activity_month', 'is_service', 'state'])\
        .agg({'amount': sum}).reset_index()

df[(df.donor_id=='_1D50SWTKX') & (df.activi...
```

*Source: `notebooks/42_ExtractDonor_Data.ipynb` in smalladi78/SEF (Unlicense)*
Plots! Plots! Plots!
```python
donor_data = pd.read_pickle('out/42/donors.pkl')
donations = df

import locale
color1 = '#67a9cf'
color2 = '#fc8d59'
colors = [color1, color2]
_ = locale.setlocale(locale.LC_ALL, '')
thousands_sep = lambda x: locale.format("%.2f", x, grouping=True)
```
Do people tend to give more money over the years? Calculate the amount donated in the Nth year of donation.
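The groupby-then-median pattern used below can be sketched on a tiny synthetic table (these are made-up donations, not the real SEF data):

```python
import pandas as pd

# synthetic donations: Nth year of giving and amount donated
df = pd.DataFrame({'year_of_donation': [1, 1, 2, 2, 2],
                   'amount': [10.0, 30.0, 5.0, 15.0, 100.0]})

# median amount donated in each Nth year of giving
median_by_year = df.groupby('year_of_donation')['amount'].median()
print(median_by_year)  # year 1 -> 20.0, year 2 -> 15.0
```

The median is a sensible choice here because donation amounts are heavily skewed by a few very large gifts.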
```python
yearly_donors = donor_data[donor_data.is_service==True]\
    .groupby(['year_of_donation', 'donor_id'])\
    .amount.sum()\
    .to_frame()
yearly_donors.index = yearly_donors.index.droplevel(1)
data = yearly_donors.reset_index().groupby('year_of_donation').amount.median().reset_index()
data.columns = ['year_of_donation', 'amoun...
```
New donors vs repeat donors
```python
data1 = donor_data[(donor_data.is_service==False)].groupby(['activity_year', 'is_repeat_year']).donor_id.nunique().unstack().fillna(0)
data1 = pd.DataFrame(data1.values, columns=['New', 'Repeat'], index=np.sort(data1.index.unique()))
data1 = data1.apply(lambda x: x/x.sum(), axis=1)
data2 = donor_data[(donor_data.is_ser...
```
What proportion of money is coming in through the various marketing channels?
```python
x = donations\
    .groupby(['activity_year', 'channel']).amount.sum().to_frame().unstack().fillna(0)
x.columns = x.columns.droplevel(0)
x = x/1000000
plot = x.plot(kind='line', colormap=plt.cm.jet, fontsize=12, figsize=(12, 18))
#plt.legend().set_visible(False)
plt.legend(...
```
Churn of donors
```python
donor_data.head()

def get_churn(year):
    return len(set(
        donor_data[(donor_data.activity_year==year) & (donor_data.is_service==False)].donor_id.unique())\
        .difference(set(donor_data[(donor_data.activity_year>year) & (donor_data.is_service==False)].donor_id.unique())))

churn = pd.Series(
    [-get_churn(year...
```
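The set-difference idea behind get_churn can be sketched in isolation (the donor ids below are synthetic):

```python
# donors active in the reference year
donors_this_year = {'a', 'b', 'c', 'd'}
# donors who appear in any later year
donors_later = {'b', 'd', 'e'}

# churned donors: active this year, never seen again afterwards
churned = donors_this_year.difference(donors_later)
print(sorted(churned))  # ['a', 'c']
```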
How do fund-raisers impact donation dollars?
```python
from itertools import cycle

def plot_event_donation_activity(state, years):
    ymdata = np.cumsum(
        donations[(donations.state==state)].groupby(['activity_year', 'activity_month'])['amount', ]\
        .sum()\
        .unstack()\
        .fillna(0), axis=1, dtype='int64')
    state_events = ev...
```
Data on urban bias. Our earlier analysis of the Harris-Todaro migration model suggested that policies designed to favor certain sectors or labor groups... Let's search for indicators (and their identification codes) relating to GDP per capita and urban population share. We could look these up in a book or from the websit...
```python
wb.search('gdp.*capita.*const')[['id','name']]
```

*Source: `notebooks/DataAPIs.ipynb` in jhconning/Dev-II (BSD-3-Clause license)*
We will use NY.GDP.PCAP.KD for GDP per capita (constant 2010 US$). You can also first browse and search for data series from the World Bank's DataBank page at http://databank.worldbank.org/data/, then find the 'id' for the series you are interested in, in the 'metadata' section of the webpage. Now let's look for ...
```python
wb.search('Urban Population')[['id','name']].tail()
```
Let's take the ones we like and put their ids in a Python list; we will rename them to shorter variable names once the data is loaded into a dataframe:
```python
indicators = ['NY.GDP.PCAP.KD', 'SP.URB.TOTL.IN.ZS']
```
Since we are interested in exploring the extent of 'urban bias' in some countries, let's load data from 1980 which was toward the end of the era of import-substituting industrialization when urban-biased policies were claimed to be most pronounced.
```python
dat = wb.download(indicator=indicators, country='all', start=1980, end=1980)
dat.columns
```
Let's rename the columns to something shorter, then plot and regress log GDP per capita against urban share; we get a pretty tight fit:
```python
dat.columns = ['gdppc', 'urbpct']  # a flat list (not nested), so the columns stay a simple Index
dat['lngpc'] = np.log(dat.gdppc)
g = sns.jointplot("lngpc", "urbpct", data=dat, kind="reg", color="b", size=7)
```
That is a pretty tight fit: urbanization rises with income per capita, but there are several middle-income-country outliers that have considerably higher urbanization than would be predicted. Let's look at the regression line.
```python
mod = smf.ols("urbpct ~ lngpc", dat).fit()
print(mod.summary())
```
Now let's just look at a list of countries sorted by the size of their residuals in this regression line. Countries with the largest residuals had urbanization in excess of what the model predicts from their 1980 level of income per capita. Here is the sorted list of top 15 outliers.
```python
mod.resid.sort_values(ascending=False).head(15)
```
This is of course only suggestive, but (leaving aside island states like Singapore and Hong Kong) the list is dominated by southern-cone countries such as Chile, Argentina and Peru, which in addition to having legacies of heavy political centralization also pursued ISI policies in the 60s and 70s that many would asso...
```python
import datetime as dt  # added: this cell uses dt, which is not imported elsewhere in the excerpt

countries = ['CHL', 'USA', 'ARG']
start, end = dt.datetime(1950, 1, 1), dt.datetime(2016, 1, 1)
dat = wb.download(indicator=indicators, country=countries, start=start, end=end).dropna()
```
Let's use shorter column names:
```python
dat.columns
dat.columns = ['gdppc', 'urb']  # a flat list (not nested), so the columns stay a simple Index
dat.head()
```
Notice this has a two-level multi-index: the outer level is named 'country' and the inner level is 'year'. We can pull out the data for a single country using the .xs (cross-section) method:
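A toy MultiIndex makes the .xs behavior concrete (the countries and numbers below are synthetic, not the World Bank data):

```python
import pandas as pd

# build a small two-level (country, year) index
idx = pd.MultiIndex.from_product(
    [['Chile', 'USA'], [2000, 2001]], names=['country', 'year'])
toy = pd.DataFrame({'gdppc': [1.0, 2.0, 3.0, 4.0]}, index=idx)

# cross-section: keep only the rows for one value of the 'country' level;
# that level is dropped, leaving a plain 'year' index
chile = toy.xs('Chile', level='country')
print(chile)
```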
```python
dat.xs('Chile', level='country').head(3)
```
(Note we could also have used dat.loc['Chile'].head().) And we can pull a 'year'-level cross section like this:
```python
dat.xs('2007', level='year').head()
```
Note that what was returned was a dataframe with the data just for our selected country. We can in turn further specify what column(s) from this we want:
```python
dat.loc['Chile']['gdppc'].head()
```
Unstack data. The unstack method turns index values into column names, while the stack method converts column names back into index values. Here we apply unstack.
```python
datyr = dat.unstack(level='country')
datyr.head()
```
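The unstack operation can be seen on a toy Series (synthetic data, not the World Bank download):

```python
import pandas as pd

# long format: one row per (year, country) pair
idx = pd.MultiIndex.from_product(
    [[2000, 2001], ['Chile', 'USA']], names=['year', 'country'])
long = pd.Series([1.0, 2.0, 3.0, 4.0], index=idx, name='gdppc')

# unstack moves the 'country' index level into the columns
wide = long.unstack(level='country')
print(wide)
```

Calling `wide.stack()` would fold the country columns back into an index level, reversing the reshape.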
We can now easily index a single year's cross-section of GDP per capita, here 1962, like so:
```python
datyr.xs('1962')['gdppc']
```
We'd get the same result from datyr.loc['1962']['gdppc']. We can also easily plot all countries:
```python
datyr['urb'].plot(kind='line');
```
Set basic parameters:
```python
b = -5.4846     # elevation of aquifer bottom in m (aquifer thickness 5.4846 m)
Q = 1199.218    # constant discharge in m^3/d
r = 251.1552    # distance from observation well to test well in m
rw = 0.1524     # screen radius of test well in m
```

*Source: `pumpingtest_benchmarks/4_test_of_gridley.ipynb` in mbakker7/ttim (MIT license)*
Load dataset:
```python
data1 = np.loadtxt('data/gridley_well_1.txt')
t1 = data1[:, 0]
h1 = data1[:, 1]
data2 = np.loadtxt('data/gridley_well_3.txt')
t2 = data2[:, 0]
h2 = data2[:, 1]
```
Create conceptual model:
```python
ml = ModelMaq(kaq=10, z=[0, b], Saq=0.001, tmin=0.001, tmax=1, topboundary='conf')
w = Well(ml, xw=0, yw=0, rw=rw, tsandQ=[(0, Q)], layers=0)
ml.solve()
```
Calibrate, first using only one dataset:
```python
# unknown parameters: kaq, Saq
ca_0 = Calibrate(ml)
ca_0.set_parameter(name='kaq0', initial=10)
ca_0.set_parameter(name='Saq0', initial=1e-4)
ca_0.series(name='obs1', x=r, y=0, t=t1, h=h1, layer=0)
ca_0.fit(report=True)
display(ca_0.parameters)
print('rmse:', ca_0.rmse())
hm_0 = ml.head(r, 0, t1)
plt.figure(figsize = ...
```
Calibrate with two datasets simultaneously:
```python
ml_1 = ModelMaq(kaq=10, z=[0, b], Saq=0.001, tmin=0.001, tmax=1, topboundary='conf')
w_1 = Well(ml_1, xw=0, yw=0, rw=rw, tsandQ=[(0, Q)], layers=0)
ml_1.solve()
ca_2 = Calibrate(ml_1)
ca_2.set_parameter(name='kaq0', initial=10)
ca_2.set_parameter(name='Saq0', initial=1e-4, pmin=0)
ca_2.series(name='obs1', x=r, y=0, t=...
```
Try adding well skin resistance and wellbore storage:
```python
ml_2 = ModelMaq(kaq=10, z=[0, b], Saq=0.001, tmin=0.001, tmax=1, topboundary='conf')
w_2 = Well(ml_2, xw=0, yw=0, rw=rw, rc=0.2, res=0.2, tsandQ=[(0, Q)], layers=0)
ml_2.solve()
```
If wellbore storage (rc) is added to the parameters being optimized, the fit returns implausibly large values for every parameter. However, if rc is removed from the well function, the fit cannot be completed with uncertainties. The rc value was therefore fixed at 0.2 by trial and error.
```python
ca_3 = Calibrate(ml_2)
ca_3.set_parameter(name='kaq0', initial=10)
ca_3.set_parameter(name='Saq0', initial=1e-4, pmin=0)
ca_3.set_parameter_by_reference(name='res', parameter=w_2.res, initial=0.2)
ca_3.series(name='obs1', x=r, y=0, t=t1, h=h1, layer=0)
ca_3.series(name='obs3', x=0, y=0, t=t2, h=h2, layer=0)
ca...
```
Summary of values simulated by AQTESOLV and MLU. The results simulated by the different methods with two datasets fitted simultaneously are presented below. In the AQTESOLV example, the result simulated with only the observation well is presented. The comparison of results when only the observation well is included can be found in the rep...
```python
t = pd.DataFrame(columns=['k [m/d]', 'Ss [1/m]', 'res'],
                 index=['MLU', 'AQTESOLV', 'ttim', 'ttim-res&rc'])
t.loc['MLU'] = [38.094, 1.193E-06, '-']
t.loc['AQTESOLV'] = [37.803, 1.356E-06, '-']
t.loc['ttim'] = np.append(ca_2.parameters['optimal'].values, '-')
t.loc['ttim-res&rc'] = ca_3.parameters['opti...
```
Compute MNE inverse solution on evoked data in a mixed source space Create a mixed source space and compute an MNE inverse solution on an evoked dataset.
```python
# Author: Annalisa Pascarella <a.pascarella@iac.cnr.it>
#
# License: BSD (3-clause)

import os.path as op
import matplotlib.pyplot as plt

from nilearn import plotting

import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse

# Set dir
data_path = mne.datasets.sample.data_path()
subject = 'sample'
...
```

*Source: `0.21/_downloads/f094864c4eeae2b4353a90789dd18b2b/plot_mixed_source_space_inverse.ipynb` in mne-tools/mne-tools.github.io (BSD-3-Clause license)*
Set up our source space. List the substructures we are interested in; we select only the substructures we want to include in the source space:
```python
labels_vol = ['Left-Amygdala',
              'Left-Thalamus-Proper',
              'Left-Cerebellum-Cortex',
              'Brain-Stem',
              'Right-Amygdala',
              'Right-Thalamus-Proper',
              'Right-Cerebellum-Cortex']
```
Get a surface-based source space. Here we use few source points, for speed in this demonstration; in general you should use 'oct6' spacing!
```python
src = mne.setup_source_space(subject, spacing='oct5', add_dist=False,
                             subjects_dir=subjects_dir)
```
Now we create a mixed source space by adding the volume regions specified in the list labels_vol. First, read the aseg file and set up the source space, bounded by the inner skull surface (here using 10.0 mm spacing to save time; we recommend something smaller, like 5.0, in actual analyses):
```python
vol_src = mne.setup_volume_source_space(
    subject, mri=fname_aseg, pos=10.0, bem=fname_model,
    volume_label=labels_vol, subjects_dir=subjects_dir,
    add_interpolator=False,  # just for speed, usually this should be True
    verbose=True)

# Generate the mixed source space
src += vol_src

# Visualize the source ...
```
Viewing the source space. We could write the mixed source space with write_source_spaces(fname_mixed_src, src, overwrite=True). We can also export the source positions to a NIfTI file and visualize it again:
```python
nii_fname = op.join(bem_dir, '%s-mixed-src.nii' % subject)
src.export_volume(nii_fname, mri_resolution=True, overwrite=True)
plotting.plot_img(nii_fname, cmap='nipy_spectral')
```
Compute the forward matrix
```python
fwd = mne.make_forward_solution(
    fname_evoked, fname_trans, src, fname_bem,
    mindist=5.0,  # ignore sources <= 5 mm from inner skull
    meg=True, eeg=False, n_jobs=1)

leadfield = fwd['sol']['data']
print("Leadfield size : %d sensors x %d dipoles" % leadfield.shape)

src_fwd = fwd['src']
n = sum(src_fwd[i]['nuse'] ...
```
Compute inverse solution
```python
snr = 3.0            # use smaller SNR for raw data
inv_method = 'dSPM'  # sLORETA, MNE, dSPM
parc = 'aparc'       # the parcellation to use, e.g., 'aparc' 'aparc.a2009s'
loose = dict(surface=0.2, volume=1.)

lambda2 = 1.0 / snr ** 2

inverse_operator = make_inverse_operator(
    evoked.info, fwd, noise_cov, depth=None...
```
Plot the mixed source estimate
```python
initial_time = 0.1
stc_vec = apply_inverse(evoked, inverse_operator, lambda2, inv_method,
                        pick_ori='vector')
brain = stc_vec.plot(
    hemi='both', src=inverse_operator['src'], views='coronal',
    initial_time=initial_time, subjects_dir=subjects_dir)
```
Plot the surface
```python
brain = stc.surface().plot(initial_time=initial_time,
                           subjects_dir=subjects_dir)
```
Plot the volume
```python
fig = stc.volume().plot(initial_time=initial_time, src=src,
                        subjects_dir=subjects_dir)
```
Process labels. Average the source estimates within each label of the cortical parcellation and within each substructure contained in the source space.
```python
# Get labels for FreeSurfer 'aparc' cortical parcellation with 34 labels/hemi
labels_parc = mne.read_labels_from_annot(
    subject, parc=parc, subjects_dir=subjects_dir)

label_ts = mne.extract_label_time_course(
    [stc], labels_parc, src, mode='mean', allow_empty=True)

# plot the time series of 2 labels
fig, axes...
```
When we studied Young's experiment, we assumed the illumination was monochromatic. In that case, the positions of the irradiance maxima and minima were given by, <div class="alert alert-error"> Irradiance maxima. $\delta = 2 m \pi \implies \frac{a x}{D} = m \implies$ $$x_{max} = \frac{m \lam...
```python
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
plt.style.use('fivethirtyeight')
#import ipywidgets as widg
#from IPython.display import display

#####
# PARAMETERS THAT CAN BE MODIFIED
#####
Lambda = 5e-7
c0 = 3e8
omega = 2*np.pi*c0/Lambda
T = 2*np.pi/omega
tau = 2*T
###########
time = np.lin...
```

*Source: `Experimento de Young/.ipynb_checkpoints/Trenes de Onda-checkpoint.ipynb` in ecabreragranado/OpticaFisicaII (GPL-3.0 license)*
Coherence length. The time over which the phase of the wave remains constant (the time between consecutive phase jumps) is called the coherence time, and we will denote it $t_c$. If we observe the wave train spatially, we will see a figure similar to the previous one, that is, a sinusoidal figure with a period eq...
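The relation between coherence time and coherence length, $L_c = c_0 \, t_c$, can be sketched numerically; the coherence time below is an assumed, illustrative value, not one from the notebook:

```python
c0 = 3e8      # speed of light in vacuum, m/s
t_c = 1e-14   # assumed coherence time of 10 fs (illustrative value)

# the coherence length is the distance the wave travels
# during one coherence time
L_c = c0 * t_c
print(L_c)  # 3e-06 m, i.e. about 3 micrometres
```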
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
plt.style.use('fivethirtyeight')
import ipywidgets as widg
from IPython.display import display

#####
# PARAMETERS THAT CAN BE MODIFIED
#####
Lambda = 5e-7
c0 = 3e8
omega = 2*np.pi*c0/Lambda
T = 2*np.pi/omega
time = np.linspace(0,30*T,1500)
tau = 2...
Experimento de Young/.ipynb_checkpoints/Trenes de Onda-checkpoint.ipynb
ecabreragranado/OpticaFisicaII
gpl-3.0
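The order-of-magnitude relation behind the coherence length can be checked numerically. This is only a sketch: it assumes the standard estimates $t_c \approx \lambda^2/(c_0\,\Delta\lambda)$ and $l_c = c_0 t_c$, and the values of `Lambda` and `DeltaLambda` mirror the parameters that appear in the notebook's code cells.

```python
import numpy as np

# Parameter values mirroring the notebook's code (assumptions):
Lambda = 5e-7        # central wavelength: 500 nm
DeltaLambda = 7e-8   # spectral width: 70 nm
c0 = 3e8             # speed of light (m/s)

# Standard order-of-magnitude estimates for a quasi-monochromatic source:
# coherence time t_c ~ lambda^2 / (c0 * delta_lambda), coherence length l_c = c0 * t_c
t_c = Lambda**2 / (c0 * DeltaLambda)
l_c = c0 * t_c

print(t_c, l_c)
```

With these numbers the coherence length comes out at a few micrometres, which is why the fringes far from the centre of the screen wash out.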
What happens if we illuminate Young's experiment with this type of radiation? If we illuminate a double slit with a wave train like the one shown above, two waves will arrive at a given point on the screen with the same temporal evolution, but one of them delayed with respect to the other. This is...
from matplotlib.pyplot import *
from numpy import *
%matplotlib inline
style.use('fivethirtyeight')

#####
# PARAMETERS THAT CAN BE MODIFIED
#####
Lambda = 5e-7  # wavelength of the radiation, 500 nm
k = 2.0*pi/Lambda
D = 3.5  # in meters
a = 0.003  # separation between the sources, 3 mm
DeltaLambda = 7e-8  # spectral w...
Experimento de Young/.ipynb_checkpoints/Trenes de Onda-checkpoint.ipynb
ecabreragranado/OpticaFisicaII
gpl-3.0
Next, we need to instruct the AnyBody Modeling System to load and run the model. We do this using AnyScript macro commands. These are short commands that can automate operations in the AnyBody Modeling System (AMS), operations that are normally done by pointing and clicking in the AMS graphical user interface. You...
macrolist = [
    'load "Knee.any"',
    'operation Main.MyStudy.Kinematics',
    'run',
]
app.start_macro(macrolist);
docs/Tutorial/01_Getting_started_with_anypytools.ipynb
AnyBody-Research-Group/AnyPyTools
mit
Running multiple macros It is easy to run multiple macros by adding an extra set of macro commands to the macro list.
macrolist = [
    ['load "Knee.any"', 'operation Main.MyStudy.Kinematics', 'run'],
    ['load "Knee.any"', 'operation Main.MyStudy.InverseDynamics', 'run'],
]
app.start_macro(macrolist);
docs/Tutorial/01_Getting_started_with_anypytools.ipynb
AnyBody-Research-Group/AnyPyTools
mit
Parallel execution Notice that AnyPyProcess will run the anyscript macros in parallel. Modern computers have multiple cores, but a single AnyBody instance can only utilize a single core, leaving us with a great potential for speeding things up through parallelization. To test this, let us create forty macros in a for-loo...
macrolist = []
for i in range(40):
    macro = [
        'load "Knee.any"',
        'operation Main.MyStudy.InverseDynamics',
        'run',
    ]
    macrolist.append(macro)
docs/Tutorial/01_Getting_started_with_anypytools.ipynb
AnyBody-Research-Group/AnyPyTools
mit
AnyPyProcess has a parameter 'num_processes' that controls the number of parallel processes. Let us try a small example to see the difference in speed:
# First sequentially
app = AnyPyProcess(num_processes = 1)
app.start_macro(macrolist);

# Then with parallelization
app = AnyPyProcess(num_processes = 4)
app.start_macro(macrolist);
docs/Tutorial/01_Getting_started_with_anypytools.ipynb
AnyBody-Research-Group/AnyPyTools
mit
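The speedup comes purely from fanning independent jobs out over workers. Below is a minimal sketch of that pattern in plain Python, where `run_job` is a hypothetical stand-in for one 'load ... run' macro; AnyPyProcess itself launches separate AnyBody console processes rather than threads, so this only illustrates the scheduling idea.

```python
from concurrent.futures import ThreadPoolExecutor

def run_job(i):
    # placeholder for one macro run: load "Knee.any"; operation ...; run
    return i * i

jobs = list(range(8))

# sequential execution, analogous to num_processes = 1
seq = [run_job(j) for j in jobs]

# concurrent execution, analogous to num_processes = 4
with ThreadPoolExecutor(max_workers=4) as pool:
    par = list(pool.map(run_job, jobs))
```

Because the jobs are independent, both paths produce identical results; only the wall-clock time differs when each job is expensive.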
Note: In general you should not use a num_processes larger than the number of cores in your computer. Getting data from the AnyBody Model In the following macro, we have added a new class operation to 'Dump' the result of the maximum muscle activity. The start_macro method will return all the dumped variables:
import numpy as np
macrolist = [
    'load "Knee.any"',
    'operation Main.MyStudy.InverseDynamics',
    'run',
    'classoperation Main.MyStudy.Output.MaxMuscleActivity "Dump"',
]
results = app.start_macro(macrolist)
docs/Tutorial/01_Getting_started_with_anypytools.ipynb
AnyBody-Research-Group/AnyPyTools
mit
We can export more variables by adding more classoperation commands, but there is a better way of doing this, as we shall see in the next tutorials. Finally, to make a plot, we import the matplotlib library and enable inline figures.
max_muscle_act = results[0]['Main.MyStudy.Output.MaxMuscleActivity']

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

plt.plot(max_muscle_act);
docs/Tutorial/01_Getting_started_with_anypytools.ipynb
AnyBody-Research-Group/AnyPyTools
mit
Import calibration data. The calibration data should be acquired by first running the commands "G28" and then "G29 V3" on the printer. To increase the number of points, you can set AUTO_BED_LEVELING_GRID_POINTS to some larger value. I used 25.
in_data = """ < 21:10:41: Bed X: 0.000 Y: -70.000 Z: 1.370 < 21:10:42: Bed X: 25.000 Y: -65.000 Z: 1.280 < 21:10:42: Bed X: 20.000 Y: -65.000 Z: 1.390 < 21:10:43: Bed X: 15.000 Y: -65.000 Z: 1.430 < 21:10:43: Bed X: 10.000 Y: -65.000 Z: 1.500 < 21:10:44: Bed X: 5.000 Y: -65.000 Z: 1.530 < 21:10:44: Bed X: 0.000 Y: -65....
delta_calibration.ipynb
mairas/delta_calibration
mit
Format the raw data as x, y, z vectors.
lines = in_data.strip().splitlines()

x = []
y = []
z = []
for line in lines:
    cols = line.split()
    x.append(float(cols[4]))
    y.append(float(cols[6]))
    z.append(float(cols[8]))

x = np.array(x)
y = np.array(y)
z = np.array(z)
delta_calibration.ipynb
mairas/delta_calibration
mit
Set x and y axis offsets (X_PROBE_OFFSET_FROM_EXTRUDER and Y_PROBE_OFFSET_FROM_EXTRUDER)
# note that the offset values actually aren't used in delta leveling, but
# we want to use them to improve calibration accuracy
x_offset = 0.
y_offset = -17.5

x = x + x_offset
y = y + y_offset

# limit the radius of the analysis to bed centre
d = np.sqrt(x**2 + y**2)
x = x[d < 50]
y = y[d < 50]
z = z[d < 50]
delta_calibration.ipynb
mairas/delta_calibration
mit
z values acquired:
z
delta_calibration.ipynb
mairas/delta_calibration
mit
Plane calibration First, let's level the bed. Ideally, you should adjust the bed leveling physically, but we can also do it in software. Note that this is precisely what the "G29" command does. Plot the raw values. Nice!
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
surf = ax.plot_trisurf(x, y, z, cmap=cm.coolwarm, linewidth=0)
delta_calibration.ipynb
mairas/delta_calibration
mit
Solve (OK, we're lazy - optimize) the plane equation.
def f(k_x, k_y, b):
    return z + k_x * x + k_y * y + b

def fopt(k):
    k_x, k_y, b = k
    return f(k_x, k_y, b)

res = leastsq(fopt, [0, 0, 0])
res
k_x, k_y, b = res[0]
delta_calibration.ipynb
mairas/delta_calibration
mit
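The leastsq fit above can be sanity-checked on synthetic data. This is only a sketch; it assumes the same residual convention as the notebook, i.e. the fitted plane is $z = -(k_x x + k_y y + b)$, so the optimizer drives `z + k_x*x + k_y*y + b` toward zero. The tilt coefficients are made up for the test.

```python
import numpy as np
from scipy.optimize import leastsq

rng = np.random.default_rng(0)
x = rng.uniform(-50, 50, 30)
y = rng.uniform(-50, 50, 30)

true = (0.01, -0.02, 0.5)                    # hypothetical k_x, k_y, b of a tilted bed
z = -(true[0] * x + true[1] * y + true[2])   # perfectly planar "measurements"

def residual(k):
    # same residual shape as fopt in the notebook
    k_x, k_y, b = k
    return z + k_x * x + k_y * y + b

(k_x, k_y, b), _ = leastsq(residual, [0.0, 0.0, 0.0])
```

On noise-free planar data the recovered coefficients match the ones used to generate it, which confirms the sign convention.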
Once we have flattened the print bed, this is the residual curve:
z2 = f(k_x, k_y, b)

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
surf = ax.plot_trisurf(x, y, z2, cmap=cm.coolwarm, linewidth=0)
delta_calibration.ipynb
mairas/delta_calibration
mit
Cool, eh? Verify that the center point (highest or lowest point of the dome or the cup) is really close to 0, 0 - if not, adjust the offset values. In my example, the curve has a distinct saddle-like shape. This is most likely due to some skew in printer frame geometry - I'm still finishing the printer construction and...
# Known variables

# DELTA_DIAGONAL_ROD
L = 210.0

# DELTA_RADIUS
DR = 105.3
delta_calibration.ipynb
mairas/delta_calibration
mit
Inverse kinematics equations:
def inv_kin(L, DR, x, y):
    Avx = 0
    Avy = DR
    # tower angles: 30 degrees must be converted to radians, not divided by 2*pi
    Bvx = DR * np.cos(np.radians(30.0))
    Bvy = -DR * np.sin(np.radians(30.0))
    Cvx = -DR * np.cos(np.radians(30.0))
    Cvy = -DR * np.sin(np.radians(30.0))
    Acz = np.sqrt(L**2 - (x - Avx)**2 - (y - Avy)**2)
    Bcz = np.sqrt(L**2 - (x - Bvx)**2 - (y - Bvy)...
delta_calibration.ipynb
mairas/delta_calibration
mit
For true L and DR, these return the zero height locations. For our imperfect dimensions, we need to calculate the actual x,y,z coordinates with forward kinematics. Since we're lazy, we solve the forward kinematic equations numerically from the inverse ones.
def fwd_kin_scalar(L, DR, Az, Bz, Cz):
    # now, solve the inv_kin equation:
    #   Acz, Bcz, Ccz = inv_kin(L, DR, x, y)
    # (plus the common carriage offset z) for x, y, z
    def fopt(x_):
        x, y, z = x_
        Aczg, Bczg, Cczg = inv_kin(L, DR, x, y)
        #print("F: ", Aczg, Bczg, Cczg)
        return [Aczg+z-Az, Bczg+z-Bz, Ccz...
delta_calibration.ipynb
mairas/delta_calibration
mit
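The "solve forward kinematics numerically from the inverse equations" idea can be sketched in a self-contained way with `scipy.optimize.fsolve`. This is an illustrative version, not the notebook's exact code: the tower layout is the standard delta arrangement with towers 120 degrees apart (note `np.radians` for the 30-degree angles), and `L`, `DR` reuse the values defined above.

```python
import numpy as np
from scipy.optimize import fsolve

L, DR = 210.0, 105.3  # diagonal rod length and delta radius, as above

def inv_kin(L, DR, x, y):
    # tower positions on a circle of radius DR, 120 degrees apart
    Avx, Avy = 0.0, DR
    Bvx, Bvy = DR * np.cos(np.radians(30.0)), -DR * np.sin(np.radians(30.0))
    Cvx, Cvy = -DR * np.cos(np.radians(30.0)), -DR * np.sin(np.radians(30.0))
    Acz = np.sqrt(L**2 - (x - Avx)**2 - (y - Avy)**2)
    Bcz = np.sqrt(L**2 - (x - Bvx)**2 - (y - Bvy)**2)
    Ccz = np.sqrt(L**2 - (x - Cvx)**2 - (y - Cvy)**2)
    return Acz, Bcz, Ccz

def fwd_kin(L, DR, Az, Bz, Cz):
    # invert inv_kin numerically: find (x, y, z) whose carriage heights match
    def eqs(v):
        x, y, z = v
        A, B, C = inv_kin(L, DR, x, y)
        return [A + z - Az, B + z - Bz, C + z - Cz]
    return fsolve(eqs, [0.0, 0.0, 0.0])

# round trip: carriage heights for nozzle at (10, 20, 5) map back to (10, 20, 5)
A, B, C = inv_kin(L, DR, 10.0, 20.0)
x, y, z = fwd_kin(L, DR, A + 5.0, B + 5.0, C + 5.0)
```

This round trip, inv_kin followed by fwd_kin recovering the original nozzle position, is the same property that `test_fwd_kin_reciprocity` checks.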
Test fwd_kin correctness
def test_fwd_kin_reciprocity():
    xs = np.linspace(-100, 70, 51)
    ys = np.linspace(60, -50, 51)
    zs = np.linspace(0, 50, 51)
    for i, (x, y, z) in enumerate(zip(xs, ys, zs)):
        Acz, Bcz, Ccz = inv_kin(L, DR, x, y)
        xi, yi, zi = fwd_kin(L, DR, Acz+z, Bcz+z, Ccz+z)
        assert np.abs(x-xi) < 1e-...
delta_calibration.ipynb
mairas/delta_calibration
mit
Now for the beef. Define the error incurred by an incorrect delta radius value.
def Z_err(e):
    # idealized inverse kinematics (this is what the firmware calculates)
    Acz, Bcz, Ccz = inv_kin(L, DR, x, y)
    # and this is where the extruder ends up in the real world
    x_e, y_e, z_e = fwd_kin(L, DR+e, Acz, Bcz, Ccz)
    return z_e
delta_calibration.ipynb
mairas/delta_calibration
mit
Define the optimization function.
def fopt(x):
    e, = x
    zerr = Z_err(e)
    # ignore any constant offset
    zerr = zerr - np.mean(zerr)
    return np.sum((z2-zerr)**2)
delta_calibration.ipynb
mairas/delta_calibration
mit
Run the optimization. This might take a few moments.
res2 = minimize(fopt, np.array([-2]), method='COBYLA', options={"disp": True})
res2

e, = res2.x
e

zerr = Z_err(e)
zerr = zerr - np.mean(zerr)

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
surf = ax.plot_trisurf(x, y, z2-zerr, cmap=cm.coolwarm, linewidth=0)
delta_calibration.ipynb
mairas/delta_calibration
mit
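The same derivative-free fit can be exercised on a toy problem. This sketch recovers a single scalar with COBYLA just as above; the bowl-shaped `target` data and the true value 1.7 are made up purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

xdata = np.linspace(-1.0, 1.0, 50)
target = 1.7 * xdata**2          # "measured" bowl generated with e_true = 1.7

def fopt(v):
    # squared-error objective in a single unknown e, as in the delta-radius fit
    e, = v
    model = e * xdata**2
    return np.sum((target - model)**2)

res = minimize(fopt, np.array([-2.0]), method='COBYLA')
```

Starting well away from the optimum, COBYLA still homes in on the generating value, which is the behaviour the delta-radius fit relies on.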
Call the brownian function to simulate a Wiener process with 1000 steps and max time of 1.0. Save the results as two arrays t and W.
t, W = brownian(1.0, 1000)

assert isinstance(t, np.ndarray)
assert isinstance(W, np.ndarray)
assert t.dtype==np.dtype(float)
assert W.dtype==np.dtype(float)
assert len(t)==len(W)==1000
assignments/assignment03/NumpyEx03.ipynb
sthuggins/phys202-2015-work
mit
Visualize the process using plt.plot with t on the x-axis and W(t) on the y-axis. Label your x and y axes.
plt.plot(t, W)
plt.xlabel('t')
plt.ylabel('W(t)')

assert True # this is for grading
assignments/assignment03/NumpyEx03.ipynb
sthuggins/phys202-2015-work
mit
Use np.diff to compute the changes at each step of the motion, dW, and then compute the mean and standard deviation of those differences.
dW = np.diff(W)
dW.mean()
dW.std()

assert len(dW)==len(W)-1
assert dW.dtype==np.dtype(float)
assignments/assignment03/NumpyEx03.ipynb
sthuggins/phys202-2015-work
mit
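A minimal sketch of what the exercise is driving at, assuming `brownian(t_max, n)` is meant to return times `t` and a Wiener path `W` built from independent normal increments of standard deviation $\sqrt{\Delta t}$; the real function is defined elsewhere in the assignment, so this version is only illustrative.

```python
import numpy as np

def brownian(t_max, n, seed=0):
    # times and a Wiener path: W(0) = 0, increments ~ N(0, sqrt(dt))
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, t_max, n)
    dt = t_max / (n - 1)
    dW = rng.normal(0.0, np.sqrt(dt), n - 1)
    W = np.concatenate([[0.0], np.cumsum(dW)])
    return t, W

t, W = brownian(1.0, 1000)
dW = np.diff(W)
```

As the exercise expects, `dW` has mean near 0 and standard deviation near $\sqrt{\Delta t}$.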