Step 5: Calibration

To calibrate our groundwater model, we create a calibration object with the Calibrate class. The Calibrate object takes the model ml as an argument. We then set the parameters we are adjusting:

- Hydraulic conductivity: kaq0 (hydraulic conductivity of layer 0)
- Specific Storage S...
ca1 = Calibrate(ml)  # Calibrate object
ca1.set_parameter(name='kaq0', initial=10)  # Setting parameters
ca1.set_parameter(name='Saq0', initial=1e-4)
ca1.series(name='obs1', x=r1, y=0, t=t1, h=h1, layer=0)  # Adding observations
pumpingtests_new/confined1_oude_korendijk.ipynb
mbakker7/ttim
mit
The fit method runs the least-squares algorithm to find the optimal parameter values:
ca1.fit(report=True)  # Fit the model; set report=False to hide the output below
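Under the hood, a least-squares fit minimizes the sum of squared residuals between observed and modeled heads. As a minimal, self-contained sketch of the idea (not ttim's internals), here is an ordinary least-squares fit of a toy logarithmic drawdown model with numpy; the names t, h_obs, a_true and b_true are illustrative only:

```python
import numpy as np

# Toy 'observations' generated from h = a*log(t) + b with known parameters
t = np.array([0.1, 0.2, 0.5, 1.0])
a_true, b_true = 2.0, 1.0
h_obs = a_true * np.log(t) + b_true

# Least squares: solve for [a, b] minimizing ||A @ params - h_obs||^2
A = np.column_stack([np.log(t), np.ones_like(t)])
(a_fit, b_fit), *_ = np.linalg.lstsq(A, h_obs, rcond=None)
print(a_fit, b_fit)  # recovers approximately a=2.0, b=1.0 for noise-free data
```

ttim's Calibrate does the same thing in spirit, but with a nonlinear solver, because the heads depend nonlinearly on kaq and Saq.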
The optimal parameters and their related fit statistics are saved inside the Calibrate object as a DataFrame in the .parameters attribute:
ca1.parameters
The calibration RMSE can be accessed with the .rmse method:
print('rmse:', ca1.rmse())
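The RMSE reported here is simply the root of the mean squared residual between observed and modeled heads. A quick numpy check with made-up numbers (h_obs and h_mod are illustrative, not the Oude Korendijk data):

```python
import numpy as np

h_obs = np.array([1.2, 0.9, 0.7])  # hypothetical observed drawdowns
h_mod = np.array([1.1, 1.0, 0.6])  # hypothetical modeled drawdowns
rmse = np.sqrt(np.mean((h_obs - h_mod) ** 2))
print(round(rmse, 6))  # 0.1
```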
Finally, we can access the model drawdowns by asking the calibrated model to compute the heads at the well location and time intervals specified by the sampled data. For this we use the .head method of the model object, in our case ml. The arguments are: * the positions x and y of the piezometric well (or any othe...
hm1 = ml.head(x=r1, y=0, t=t1)  # Using the head method to calculate model results
hm1.shape  # Demonstration of the output shape
Plotting the Model Results
# matplotlib plot for calibration
plt.figure(figsize=(10, 7))
plt.semilogx(t1, h1, '.', label='obs at 30 m')  # Plotting the observed drawdown
plt.semilogx(t1, hm1[0], label='ttim at 30 m')  # Simulated drawdown
plt.xlabel('time [d]')
plt.ylabel('drawdown [m]')
plt.title('ttim analysis for Oude Korendijk - Piezometer 30 m...
Step 5.2. Calibrate Model Parameters with Observation Well 2 (90 m distance) We proceed to calibrate using only the data from observation well 2. This time we will rush forward through the calibration steps; if anything is unclear, go back and check the inputs in Step 5.1.
ca2 = Calibrate(ml)
ca2.set_parameter(name='kaq0', initial=10)
ca2.set_parameter(name='Saq0', initial=1e-4)
ca2.series(name='obs2', x=r2, y=0, t=t2, h=h2, layer=0)
ca2.fit(report=True)
ca2.parameters
print('rmse:', ca2.rmse())
hm2 = ml.head(r2, 0, t2)
plt.figure(figsize=(10, 7))
plt.semilogx(t2, h2, '.', label='obs at...
Step 5.3. Calibrate model with two datasets simultaneously Here we explore the ability of TTim to calibrate the model using more than one observation location. This can be done simply by calling the .series method on the Calibrate object multiple times:
ca = Calibrate(ml)
ca.set_parameter(name='kaq0', initial=10)
ca.set_parameter(name='Saq0', initial=1e-4)
ca.series(name='obs1', x=r1, y=0, t=t1, h=h1, layer=0)  # Adding well 1
ca.series(name='obs2', x=r2, y=0, t=t2, h=h2, layer=0)  # Adding well 2
ca.fit(report=True)
ca.parameters
print('rmse:', ca.rmse())
hs1 = ml.hea...
Step 6. Calibrate Model with Wellbore Storage In this continuation, we investigate whether adding wellbore storage improves the fit. Step 6.1. Reload the model
# unknown parameters: kaq, Saq and rc
ml1 = ModelMaq(kaq=60, z=[zt, zb], Saq=1e-4, tmin=1e-5, tmax=1)
Step 6.2. Define new Well object with wellbore storage Now, besides the parameters explained in Step 3, we have to add the radius of the caisson (rc)
w1 = Well(ml1, xw=0, yw=0, rw=0.2, rc=0.3, tsandQ=[(0, Q)], layers=0)
ml1.solve(silent=True)
Step 6.3. Calibrate using only the data from observation well 1 Here we use the method .set_parameter_by_reference to calibrate the rc parameter in our well. .set_parameter_by_reference takes the following arguments: * name: string of the parameter name * parameter: numpy-array with the parameter to be optimized. It sh...
ca3 = Calibrate(ml1)
ca3.set_parameter(name='kaq0', initial=10)
ca3.set_parameter(name='Saq0', initial=1e-4)
ca3.set_parameter_by_reference(name='rc', parameter=w1.rc[0:], initial=0.2, pmin=0.01)
ca3.series(name='obs1', x=r1, y=0, t=t1, h=h1, layer=0)
ca3.fit(report=True)
ca3.parameters
print('rmse:', ca3.rmse())
hm3...
Step 6.4. Calibrate using only the data from observation well 2 Here we repeat Step 6.3 for well 2.
ca4 = Calibrate(ml1)
ca4.set_parameter(name='kaq0', initial=10)
ca4.set_parameter(name='Saq0', initial=1e-4)
ca4.set_parameter_by_reference(name='rc', parameter=w1.rc[0:], initial=0.2, pmin=0.01)
ca4.series(name='obs2', x=r2, y=0, t=t2, h=h2, layer=0)
ca4.fit(report=True)
ca4.parameters
print('rmse:', ca4.rmse())
hm4...
Step 6.5. Calibrate model with two datasets simultaneously Following the same logic as Steps 6.3 and 6.4 and the calibration from Step 5.3, we can now check the calibration using both wells and including wellbore storage.
ca0 = Calibrate(ml1)
ca0.set_parameter(name='kaq0', initial=10)
ca0.set_parameter(name='Saq0', initial=1e-4)
ca0.set_parameter_by_reference(name='rc', parameter=w1.rc[0:], initial=0.2, pmin=0.01)
ca0.series(name='obs1', x=r1, y=0, t=t1, h=h1, layer=0)
ca0.series(name='obs2', x=r2, y=0, t=t2, h=h2, layer=0)
ca0.fit(rep...
Step 7. Comparison of Results Step 7.1. Comparison of model performance and results with and without wellbore storage 7.1.1. RMSE of the two conceptual models The following table summarises the RMSE values of the models obtained with and without wellbore storage.
t0 = pd.DataFrame(columns=['obs 30 m', 'obs 90 m', 'obs simultaneously'],
                  index=['without rc', 'with rc'])
t0.loc['without rc', 'obs 30 m'] = ca1.rmse()
t0.loc['without rc', 'obs 90 m'] = ca2.rmse()
t0.loc['without rc', 'obs simultaneously'] = ca.rmse()
t0.loc['with rc', 'obs 30 m'] = ca3.rmse()
t0.loc['with rc', 'obs ...
Adding wellbore storage improved the fit when drawdown data from the individual observation wells were used. However, when the model was calibrated with both datasets simultaneously, rc was adjusted to the minimum value, and adding rc did not improve the performance much in this case. 7.1.2. Model comparisons We can see ...
# Preparing the DataFrame:
t1 = pd.DataFrame(columns=['kaq - opt', 'kaq - min', 'kaq - max', 'W. Storage', 'Calib. Dataset'])
w_storage = ['without rc', 'without rc', 'without rc', 'with rc', 'with rc', 'with rc']
obs_dataset = ['obs 30 m', 'obs 90 m', 'obs simultaneously', 'obs 30 m', 'obs 90 m', 'obs simultaneously']
# Loop...
The errorbar plot shows that the hydraulic conductivities calculated with wellbore storage are significantly higher than those without it when the individual well datasets are used for calibration. As for the calibration using both observation wells at the same time, the results show no significant differences. Both scenari...
t = pd.DataFrame(columns=['k [m/d]', 'Ss [1/m]', 'RMSE'],
                 index=['K&dR', 'TTim', 'AQTESOLV', 'MLU'])
t.loc['TTim'] = np.append(ca.parameters['optimal'].values, ca.rmse())
t.loc['AQTESOLV'] = [66.086, 2.541e-05, 0.05006]
t.loc['MLU'] = [66.850, 2.400e-05, 0.05083]
t.loc['K&dR'] = [55.71429, 1.7E-4, '-...
1. Linear multi-fidelity model The linear multi-fidelity model proposed in [Kennedy and O'Hagan, 2000] is widely viewed as a reference point for all such models. In this model, the high-fidelity (true) function is modeled as a scaled sum of the low-fidelity function plus an error term: $$ f_{high}(x) = f_{err}(x) + \rho f_{low}(x) $$ ...
import GPy
import emukit.multi_fidelity
import emukit.test_functions
from emukit.model_wrappers.gpy_model_wrappers import GPyMultiOutputWrapper
from emukit.multi_fidelity.models import GPyLinearMultiFidelityModel

## Generate samples from the Forrester function
high_fidelity = emukit.test_functions.forrester.forrester...
notebooks/Emukit-tutorial-multi-fidelity.ipynb
EmuKit/emukit
apache-2.0
The inputs to the models are expected to take the form of ndarrays where the last column indicates the fidelity of the observed points. Although only the input points, $X$, are augmented with the fidelity level, the observed outputs $Y$ must also be converted to array form. For example, a dataset consisting of 3 low-fi...
## Convert lists of arrays to ndarrays augmented with fidelity indicators
from emukit.multi_fidelity.convert_lists_to_array import convert_x_list_to_array, convert_xy_lists_to_arrays

X_train, Y_train = convert_xy_lists_to_arrays([x_train_l, x_train_h], [y_train_l, y_train_h])

## Plot the original functions
plt.figu...
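The array layout the converters produce can be sketched by hand: the input rows are stacked and each gets an extra final column holding its fidelity index (0 = low, 1 = high). A small numpy illustration, assuming a made-up set of 3 low- and 2 high-fidelity points:

```python
import numpy as np

x_low = np.array([[0.1], [0.5], [0.9]])   # 3 low-fidelity inputs (hypothetical)
x_high = np.array([[0.3], [0.7]])         # 2 high-fidelity inputs (hypothetical)

# Stack inputs and append a fidelity-indicator column
X = np.vstack([
    np.hstack([x_low, np.zeros((len(x_low), 1))]),   # fidelity 0
    np.hstack([x_high, np.ones((len(x_high), 1))]),  # fidelity 1
])
print(X.shape)   # (5, 2)
print(X[:, -1])  # [0. 0. 0. 1. 1.]
```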
Observe that in the example above we restrict our observations to 12 from the lower fidelity function and only 6 from the high fidelity function. As we shall demonstrate further below, fitting a standard GP model to the few high fidelity observations is unlikely to result in an acceptable fit, which is why we shall ins...
## Construct a linear multi-fidelity model
kernels = [GPy.kern.RBF(1), GPy.kern.RBF(1)]
lin_mf_kernel = emukit.multi_fidelity.kernels.LinearMultiFidelityKernel(kernels)
gpy_lin_mf_model = GPyLinearMultiFidelityModel(X_train, Y_train, lin_mf_kernel, n_fidelities=2)
gpy_lin_mf_model.mixed_noise.Gaussian_noise.fix(0)
gpy...
The above plot demonstrates how the multi-fidelity model learns the relationship between the low and high-fidelity observations in order to model both of the corresponding functions. In this example, the posterior mean almost fits the true function exactly, while the associated uncertainty returned by the model is also...
## Create standard GP model using only high-fidelity data
kernel = GPy.kern.RBF(1)
high_gp_model = GPy.models.GPRegression(x_train_h, y_train_h, kernel)
high_gp_model.Gaussian_noise.fix(0)

## Fit the GP model
high_gp_model.optimize_restarts(5)

## Compute mean predictions and associated variance
hf_mean_high_gp_mod...
2. Nonlinear multi-fidelity model Although the model described above works well when the mapping between the low and high-fidelity functions is linear, several issues may be encountered when this is not the case. Consider the following example, where the low and high fidelity functions are defined as follows: $$ f_{low...
## Generate data for nonlinear example
high_fidelity = emukit.test_functions.non_linear_sin.nonlinear_sin_high
low_fidelity = emukit.test_functions.non_linear_sin.nonlinear_sin_low

x_plot = np.linspace(0, 1, 200)[:, None]
y_plot_l = low_fidelity(x_plot)
y_plot_h = high_fidelity(x_plot)

n_low_fidelity_points = 50
n_h...
In this case, the mapping between the two functions is nonlinear, as can be observed by plotting the high fidelity observations as a function of the lower fidelity observations.
plt.figure(figsize=(12, 8))
plt.ylabel('HF(x)')
plt.xlabel('LF(x)')
plt.plot(y_plot_l, y_plot_h, color=colors['purple'])
plt.title('Mapping from low fidelity to high fidelity')
plt.legend(['HF-LF Correlation'], loc='lower center');
2.1 Failure of linear multi-fidelity model Below we fit the linear multi-fidelity model to this new problem and plot the results.
## Construct a linear multi-fidelity model
kernels = [GPy.kern.RBF(1), GPy.kern.RBF(1)]
lin_mf_kernel = emukit.multi_fidelity.kernels.LinearMultiFidelityKernel(kernels)
gpy_lin_mf_model = GPyLinearMultiFidelityModel(X_train, Y_train, lin_mf_kernel, n_fidelities=2)
gpy_lin_mf_model.mixed_noise.Gaussian_noise.fix(0)
gpy...
As expected, the linear multi-fidelity model was unable to capture the nonlinear relationship between the low and high-fidelity data. Consequently, the resulting fit of the true function is also poor. 2.2 Nonlinear Multi-fidelity model In view of the deficiencies of the linear multi-fidelity model, a nonlinear multi-fi...
## Create nonlinear model
from emukit.multi_fidelity.models.non_linear_multi_fidelity_model import make_non_linear_kernels, NonLinearMultiFidelityModel

base_kernel = GPy.kern.RBF
kernels = make_non_linear_kernels(base_kernel, 2, X_train.shape[1] - 1)
nonlin_mf_model = NonLinearMultiFidelityModel(X_train, Y_train, n_f...
Fitting the nonlinear fidelity model to the available data very closely fits the high-fidelity function while also fitting the low-fidelity function exactly. This is a vast improvement over the results obtained using the linear model. We can also confirm that the model is properly capturing the correlation between the ...
plt.figure(figsize=(12, 8))
plt.ylabel('HF(x)')
plt.xlabel('LF(x)')
plt.plot(y_plot_l, y_plot_h, '-', color=colors['purple'])
plt.plot(lf_mean_nonlin_mf_model, hf_mean_nonlin_mf_model, 'k--')
plt.legend(['True HF-LF Correlation', 'Learned HF-LF Correlation'], loc='lower center')
plt.title('Mapping from low fidelity to h...
If you create a custom ConnectionPool that is shared by several Redis instances, you may want to disconnect the connection pool explicitly. Disconnecting the connection pool simply disconnects all connections hosted in the pool.
import redis.asyncio as redis

connection = redis.Redis(auto_close_connection_pool=False)
await connection.close()
# Or: await connection.close(close_connection_pool=False)
await connection.connection_pool.disconnect()
docs/examples/asyncio_examples.ipynb
mozillazg/redis-py-doc
mit
Transactions (Multi/Exec) The aioredis.Redis.pipeline will return an aioredis.Pipeline object, which will buffer all commands in-memory and compile them into batches using the Redis Bulk String protocol. Additionally, each command will return the Pipeline instance, allowing you to chain your commands, i.e., p.set('foo',...
import redis.asyncio as redis

r = await redis.from_url("redis://localhost")
async with r.pipeline(transaction=True) as pipe:
    ok1, ok2 = await (pipe.set("key1", "value1").set("key2", "value2").execute())
assert ok1
assert ok2
Pub/Sub Mode Subscribing to specific channels:
import asyncio

import async_timeout
import redis.asyncio as redis

STOPWORD = "STOP"


async def reader(channel: redis.client.PubSub):
    while True:
        try:
            async with async_timeout.timeout(1):
                message = await channel.get_message(ignore_subscribe_messages=True)
                if me...
Subscribing to channels matching a glob-style pattern:
import asyncio

import async_timeout
import redis.asyncio as redis

STOPWORD = "STOP"


async def reader(channel: redis.client.PubSub):
    while True:
        try:
            async with async_timeout.timeout(1):
                message = await channel.get_message(ignore_subscribe_messages=True)
                if me...
Sentinel Client The Sentinel client requires a list of Redis Sentinel addresses to connect to and start discovering services. Calling aioredis.sentinel.Sentinel.master_for or aioredis.sentinel.Sentinel.slave_for methods will return Redis clients connected to specified services monitored by Sentinel. Sentinel client wil...
import asyncio

from redis.asyncio.sentinel import Sentinel

sentinel = Sentinel([("localhost", 26379), ("sentinel2", 26379)])
r = sentinel.master_for("mymaster")
ok = await r.set("key", "value")
assert ok
val = await r.get("key")
assert val == b"value"
To avoid creating another graph container:
fig = plt.figure(num=None, facecolor='white', figsize=(12, 6))
ax = plt.subplot(1, 1, 1, aspect='equal', axisbg='white')
c = Corrplot(df)
c.plot(ax=ax)
_unittests/ut_ipythonhelper/data/example_corrplot.ipynb
sdpython/pyquickhelper
mit
Understanding the task by understanding the data What is always the first step in tackling a new machine learning problem? You are absolutely right: to get a sense of the data. The better you understand the data, the better you understand the problem you are trying to solve. The first thing to realize is that the 'drug...
target = [d['drug'] for d in data]
target
notebooks/05.01-Building-Our-First-Decision-Tree.ipynb
mbeyeler/opencv-machine-learning
mit
Then remove the 'drug' entry from all the dictionaries:
[d.pop('drug') for d in data];
Sweet! Now let's look at the data:
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')

age = [d['age'] for d in data]
age
sodium = [d['Na'] for d in data]
potassium = [d['K'] for d in data]

plt.figure(figsize=(10, 6))
plt.scatter(sodium, potassium)
plt.xlabel('sodium')
plt.ylabel('potassium')
But, what we really want is to color the data points according to their target labels:
target = [ord(t) - 65 for t in target]
target

plt.figure(figsize=(14, 10))
plt.subplot(221)
plt.scatter([d['Na'] for d in data], [d['K'] for d in data], c=target, s=100)
plt.xlabel('sodium (Na)')
plt.ylabel('potassium (K)')
plt.subplot(222)
plt.scatter([d['age'] for d in data], [d['K'] for d in data], ...
Preprocessing the data We need to convert all categorical features into numerical features:
from sklearn.feature_extraction import DictVectorizer

vec = DictVectorizer(sparse=False)
data_pre = vec.fit_transform(data)
vec.get_feature_names()
data_pre[0]
Convert to 32-bit floating point numbers in order to make OpenCV happy:
import numpy as np

data_pre = np.array(data_pre, dtype=np.float32)
target = np.array(target, dtype=np.float32).reshape((-1, 1))
data_pre.shape, target.shape
Then split data into training and test sets:
import sklearn.model_selection as ms

X_train, X_test, y_train, y_test = ms.train_test_split(
    data_pre, target, test_size=5, random_state=42
)
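Note that test_size=5 here is an integer, so scikit-learn interprets it as an absolute number of test samples rather than a fraction. A quick self-contained check on dummy data:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)  # 10 dummy samples, 2 features
y = np.arange(10)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=5, random_state=42)
print(len(X_te), len(X_tr))  # 5 5
```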
Building the decision tree Building the decision tree with OpenCV works in much the same way as in Chapter 3, First Steps in Supervised Learning. Recall that all machine learning functions reside in OpenCV 3.1's ml module. You can create an empty decision tree using the following code:
import cv2

dtree = cv2.ml.DTrees_create()
Then train the model: Note: It appears the DTrees object in OpenCV 3.1 is broken (segmentation fault). As a result, calling the train method will lead to "The kernel has died unexpectedly." There's a bug report here.
# dtree.train(X_train, cv2.ml.ROW_SAMPLE, y_train)
Predict some values:
# y_pred = dtree.predict(X_test)
Calculate the score on the training and test sets:
# from sklearn import metrics # metrics.accuracy_score(y_test, dtree.predict(X_test)) # metrics.accuracy_score(y_train, dtree.predict(X_train))
Visualizing a trained decision tree OpenCV's implementation of decision trees is good enough if you are just starting out, and don't care too much what's going on under the hood. However, in the following sections we will switch to Scikit-Learn. Their implementation allows us to customize the algorithm and makes it a l...
from sklearn import tree

dtc = tree.DecisionTreeClassifier()
The model is trained by calling fit:
dtc.fit(X_train, y_train)
dtc.score(X_train, y_train)
dtc.score(X_test, y_test)
Now, here's the cool thing: If you want to know what the tree looks like, you can do so using GraphViz to create a PDF file (or any other supported file type) from the tree structure. For this to work, you need to install GraphViz first, which you can do from the command line using conda: $ conda install graphviz
with open("tree.dot", 'w') as f:
    f = tree.export_graphviz(dtc, out_file=f,
                             feature_names=vec.get_feature_names(),
                             class_names=['A', 'B', 'C', 'D'])
Then, back on the command line, you can use GraphViz to turn "tree.dot" into (for example) a PNG file: $ dot -Tpng tree.dot -o tree.png Rating the importance of features Scikit-Learn provides a function to rate feature importance, which is a number between 0 and 1 for each feature, where 0 means "not used at all in an...
dtc.feature_importances_
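As a sanity check on how these numbers behave, here is a minimal, self-contained example (toy data, not the drug dataset): the importances sum to 1, and a feature that fully determines the label receives all of it.

```python
from sklearn.tree import DecisionTreeClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 1, 1]  # the label depends only on the first feature
clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print(clf.feature_importances_)  # [1. 0.]
```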
If we remind ourselves of the feature names, it will become clear which features seem to be the most important. A plot might be most informative:
plt.figure(figsize=(12, 6))
plt.barh(range(10), dtc.feature_importances_, align='center',
         tick_label=vec.get_feature_names())
Understanding decision rules Two of the most commonly used criteria for making decisions are the following: criterion='gini': The Gini impurity is a measure of misclassification, with the aim of minimizing the probability of misclassification. criterion='entropy': In information theory, entropy is a measure of the ...
dtce = tree.DecisionTreeClassifier(criterion='entropy')
dtce.fit(X_train, y_train)
dtce.score(X_train, y_train)
dtce.score(X_test, y_test)

with open("tree.dot", 'w') as f:
    f = tree.export_graphviz(dtce, out_file=f,
                             feature_names=vec.get_feature_names(),
                             cl...
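Both criteria can be computed by hand. For a node containing, say, hypothetical class counts of 2 A's, 3 B's and 1 C, the Gini impurity is $1 - \sum_i p_i^2$ and the entropy is $-\sum_i p_i \log_2 p_i$:

```python
import numpy as np

counts = np.array([2, 3, 1])       # hypothetical class counts at a node
p = counts / counts.sum()          # class probabilities [1/3, 1/2, 1/6]

gini = 1 - np.sum(p ** 2)          # Gini impurity
entropy = -np.sum(p * np.log2(p))  # Shannon entropy in bits

print(round(gini, 4), round(entropy, 4))  # 0.6111 1.4591
```

A pure node (all samples in one class) gives 0 for both measures, which is what the tree-growing algorithm tries to reach with each split.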
Controlling the complexity of decision trees There are two common ways to avoid overfitting: pre-pruning: This is the process of stopping the creation of the tree early. post-pruning (or just pruning): This is the process of first building the tree but then removing or collapsing nodes that contain only little info...
dtc0 = tree.DecisionTreeClassifier(criterion='entropy', max_leaf_nodes=6)
dtc0.fit(X_train, y_train)
dtc0.score(X_train, y_train)
dtc0.score(X_test, y_test)
Load Data
# Create feature matrix with:
# Feature 0: 80% class 0
# Feature 1: 80% class 1
# Feature 2: 60% class 0, 40% class 1
X = [[0, 1, 0],
     [0, 1, 1],
     [0, 1, 0],
     [0, 1, 1],
     [1, 0, 0]]
machine-learning/variance_thresholding_binary_features.ipynb
tpin3694/tpin3694.github.io
mit
Conduct Variance Thresholding In binary features (i.e. Bernoulli random variables), variance is calculated as: $$\operatorname {Var} (x)= p(1-p)$$ where $p$ is the proportion of observations of class 1. Therefore, by setting a threshold based on $p$, we can remove features where the vast majority of observations belong to one class.
# Run threshold by variance
thresholder = VarianceThreshold(threshold=(.75 * (1 - .75)))
thresholder.fit_transform(X)
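We can verify by hand which columns survive. Using the formula above with numpy: the class-1 proportions per column are 0.2, 0.8 and 0.4, giving variances 0.16, 0.16 and 0.24 against a threshold of 0.75 * 0.25 = 0.1875, so only the third feature is kept.

```python
import numpy as np

X = np.array([[0, 1, 0],
              [0, 1, 1],
              [0, 1, 0],
              [0, 1, 1],
              [1, 0, 0]])

p = X.mean(axis=0)   # proportion of ones per column: [0.2 0.8 0.4]
var = p * (1 - p)    # Bernoulli variance: [0.16 0.16 0.24]
keep = var > 0.75 * (1 - 0.75)
print(keep)  # [False False  True]
```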
Identifying data types will be helpful later on, as having conflicting types can lead to messy data science. 'Arrays' in Python Data can also be stored, arranged, and organized in ways that lend themselves to a lot of great analysis. Here are the types of data structures one might work with. Note: the term 'immutable' means tha...
house = ['Targaryen','Stark','Lannister','Tyrell','Tully','Aaryn','Martell','Baratheon','Greyjoy']
basic_python_data_science_ice_fire.ipynb
lee-ngo/dataset-ice-fire
mit
Let's do some super-basic data exploration. What happens if you type in house[5]?
house[5]
Yep, you get the sixth item in the list. A common standard in most coding languages, lists are automatically indexed upon creation, starting at 0. This will be helpful to know when you're trying to look for certain items in a list by order - they will be at the (n-1)th index. Functions and Methods - Let's do more with ...
def words_of_stark():
    return "Winter is coming!"

words_of_stark()

def shipping(x, y):
    return x + " is now romantically involved with " + y + ". Hope they're not related!"

shipping("Jon Snow", "Danaerys Targaryen")  # Sorry, spoiler alert.
Well, that was fun, but I don't want to have to re-invent the wheel. Fortunately the Python community has developed a lot of rich libraries full of classes for us to work with so that we don't have to constantly define them. We access them by importing. Importing Libraries We're going to be working with two in particula...
import pandas as pd  # We use this shortened syntax to type less later on
import matplotlib.pyplot as plt  # Specifically, we're using the PyPlot class and again, a shortened syntax

# This handy command below allows us to see our visualizations instantly, if written correctly
%matplotlib inline
Awesome! Now we're ready to start working with the dataset. About This Dataset You're About to Import I originally found this while searching for fun Game of Thrones-based data, and I found one by Chris Albon, a major contributor to the data science community. I felt it was perfect for teaching some of the basics in Python fo...
raw_dataframe = pd.read_csv("war_of_the_five_kings_dataset.csv")
We've now created an object called raw_dataframe that contains all of the data from the csv file, converted into a Pandas-based dataframe. This will allow us to do a lot of great exploratory things. Basic Exploratory Data Analysis Let's take a look at our data by using the .head() method, which allows us to see the top...
raw_dataframe.head(3) # This should show 3 rows, starting at index 0.
You now can catch a glimpse of what's in this data set! Wait a minute... What happens to the data after column defender_1? There's an ... ellipsis? This dataset is actually very wide. Pandas is doing us a favor by ignoring some of those columns. Let's make them visible by changing a small feature of our Pandas import:
pd.set_option('display.max_columns', None)
This little bit of code now makes it so that the display class of our Pandas library has no limit to the number of columns it displays. Try running the .head() method again to see if it worked.
raw_dataframe.head(3)
Great! We can now see all of the columns of data! This little bit of code is handy for customizing your libraries in the future. Let's do one more key bit of code, drawing back to our very first lesson:
raw_dataframe.info()
Here's another way for us to look at our data and see what types exist within them, indexed by key. So far we know that there are 38 data points overall, and some of those points are written as integers, others as float objects. These are all default data assignments by Pandas, but as we dive deeper, we'll care a little...
df = raw_dataframe.copy()
How many battles did Robb Stark fight in? A quick glance in the dataset shows that Robb Stark fought in battles as both an attacker and as a defender. We'll need to create sub-dataframes that isolate those key_values:
robb_off = df[df['attacker_king'] == 'Robb Stark']
robb_def = df[df['defender_king'] == 'Robb Stark']
Whoa, that looks a little complex. Let's break it down. We've created two objects: robb_off for whenever Robb Stark attacked (i.e. on the "offensive"), and robb_def for whenever Robb Stark defended. We're requesting in Python to set these objects equal to the dataframe dictionary for whenever the key of attacker_king a...
robb_total = len(robb_off) + len(robb_def)
robb_total
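Boolean masking like this is worth internalizing. A self-contained mini example with a made-up three-row dataframe (not the real battle dataset): the comparison yields a boolean Series, and indexing the dataframe with it keeps only the matching rows.

```python
import pandas as pd

battles = pd.DataFrame({
    'attacker_king': ['Robb Stark', 'Joffrey Baratheon', 'Robb Stark'],
    'attacker_outcome': ['win', 'win', 'loss'],
})

mask = battles['attacker_king'] == 'Robb Stark'  # [True, False, True]
robb = battles[mask]                            # keeps rows 0 and 2
print(len(robb))  # 2
```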
In other words, Robb Stark was involved in nearly 2/3 of all the battles fought during the War of the Five Kings. But how good of a war commander was he? How many battles did Robb Stark win as an attacker? We can build upon the objects we've already built. We have the object for the number of battles Robb fought as an ...
robb_off_win = robb_off[robb_off['attacker_outcome'] == 'win']
Using the same strategy as before, we're now looking into the sub-dataframe robb_off for whenever the key attacker_outcome has a value win. From there, it's a simple len() method.
len(robb_off_win)
Cool! Robb Stark won 8 of the battles he fought as an attacker. What about all the battles he won, including the ones as a defender? We apply the same method, but remember - victories are according to the attacker's perspective. We need times when the attacker has lost to add to Robb's scoreboard.
robb_def_win = robb_def[robb_def['attacker_outcome'] == 'loss']
Adding these two variables together gets you the number of overall victories:
len(robb_off_win) + len(robb_def_win)
.... Wait, only 9? Out of the total number of battles Robb Stark fought, he was successful as an attacker but not great on the defensive. Overall, winning 9 out of 24 battles is really not that impressive. Perhaps 'The Young Wolf' wasn't as impressive as we thought... Try answering some more questions: What was the ave...
robb_off_viz = robb_off.groupby('attacker_outcome').apply(len)
Now, we can create a simple bar graph with the code below and setting the y label with a few more methods.
robb_off_viz.plot(kind='bar').set_ylabel('# of Battles')
Let's compare that with a plot for Robb Stark's defense. Remember, in this graph, Robb is the defender, so his "wins" are in the "loss" column below.
robb_def_viz = robb_def.groupby('attacker_outcome').apply(len)
robb_def_viz.plot(kind='bar').set_ylabel('# of Battles')
basic_python_data_science_ice_fire.ipynb
lee-ngo/dataset-ice-fire
mit
We can interpret this data much more easily now with these visuals, in a couple of ways:

- Attacking is a far more effective means to victory than defending
- Robb Stark is about as good at attacking as he is terrible at defending

Cool. Though looking at just two bars is a little lame. Let's compare some more data! The code be...
attacker_win = df[df['attacker_outcome'] == 'win'].groupby('attacker_1').apply(len)
attacker_win.plot(kind='bar').set_ylabel('# of Victories')
basic_python_data_science_ice_fire.ipynb
lee-ngo/dataset-ice-fire
mit
Well, that's interesting. Turns out that the Greyjoys and the Lannisters are more effective on the attack than the Starks. Let's move on to another popular form of visualization: scatterplots. Scatterplotting in Python I was told by a good friend that scatterplots are the best way to visualize data. Let's see if he's rig...
x = robb_off['attacker_size']
y = robb_off['defender_size']
plt.scatter(x, y, color='red')
basic_python_data_science_ice_fire.ipynb
lee-ngo/dataset-ice-fire
mit
Now let's do the same thing with the battles where Robb Stark is defending and make that the color blue.
x = robb_def['defender_size']
y = robb_def['attacker_size']
plt.scatter(x, y, color='blue')
basic_python_data_science_ice_fire.ipynb
lee-ngo/dataset-ice-fire
mit
Hm, also interesting, but hard to see how it fits together. We have to run the code in a single Python block to make them all into one cool visual. I've added some code to make a "line of equality" to give us a sense of what the Starks were truly up against.
x = robb_off['attacker_size']
y = robb_off['defender_size']
plt.scatter(x, y, color='red')

x = robb_def['defender_size']
y = robb_def['attacker_size']
plt.scatter(x, y, color='blue')

plt.title('When Starks Attack')
plt.xlabel('Stark Army')
plt.ylabel('Defender Army')
plt.xlim(0,21000) # x parameters - aim for homogeneous...
basic_python_data_science_ice_fire.ipynb
lee-ngo/dataset-ice-fire
mit
Check it out! Especially with the "line of equality," we can make the following conclusion: the Starks typically faced opponents with a larger army than their own. Can we make conclusions about army size AND victory? Not yet. We'll need to redo the data for that, but you should know everything you need to ge...
df.hist()
basic_python_data_science_ice_fire.ipynb
lee-ngo/dataset-ice-fire
mit
Hm, that's not all that meaningful to us as a group, but we can focus on one key, such as year.
df.hist('year')
basic_python_data_science_ice_fire.ipynb
lee-ngo/dataset-ice-fire
mit
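Counting exactly what sits behind each histogram bar is a one-liner with `value_counts()` — sketched here on made-up years, not the real dataset:

```python
import pandas as pd

# Hypothetical years, not the real dataset
years = pd.Series([298, 299, 299, 300, 299, 300], name='year')

# Exact tallies behind the histogram bars, sorted by year
year_counts = years.value_counts().sort_index()
print(year_counts[299])  # 3
```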
A little more helpful! Here we can see that many of the battles (20) happened in the year 299, with 7 happening the year before and 11 in the year after. (We can use Python to count exactly what these numbers are.) Let's also check out how big the attacking armies were by exploring attacker_size.
df.hist('attacker_size')
basic_python_data_science_ice_fire.ipynb
lee-ngo/dataset-ice-fire
mit
Inference of the dispersion of a Gaussian with Gaussian data errors Suppose we have data drawn from a delta function with Gaussian uncertainties (all equal). How well do we limit the dispersion? Sample data:
ndata= 24
data= numpy.random.normal(size=ndata)
inference/Gaussian-Dispersion-Inference-Errors.ipynb
jobovy/misc-notebooks
bsd-3-clause
We assume that the mean is zero and implement the likelihood
def loglike(sigma,data):
    if sigma <= 0. or sigma > 2.:
        return -1000000000000000.
    return -numpy.sum(0.5*numpy.log(1.+sigma**2.)+0.5*data**2./(1.+sigma**2.))
inference/Gaussian-Dispersion-Inference-Errors.ipynb
jobovy/misc-notebooks
bsd-3-clause
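Each datum is a true value (drawn with dispersion $\sigma$) plus a unit Gaussian error, so marginally the data are Gaussian with variance $1+\sigma^2$; `loglike` is that log-density summed over the data, with the constant $-\tfrac{N}{2}\ln 2\pi$ dropped. A quick self-contained consistency check against the full Gaussian log-pdf:

```python
import numpy

def loglike(sigma, data):
    # log-likelihood of N(0, 1 + sigma^2) data, dropping the -N/2 ln(2 pi) constant
    if sigma <= 0. or sigma > 2.:
        return -1000000000000000.
    return -numpy.sum(0.5*numpy.log(1.+sigma**2.)+0.5*data**2./(1.+sigma**2.))

data = numpy.array([0.5, -1.0, 2.0])
N = len(data)
sigma = 1.0
var = 1. + sigma**2.

# Full Gaussian log-pdf summed over the data, including the constant
full = -0.5*N*numpy.log(2.*numpy.pi*var) - 0.5*numpy.sum(data**2.)/var

# loglike should equal the full log-pdf plus the dropped constant
print(abs(loglike(sigma, data) - (full + 0.5*N*numpy.log(2.*numpy.pi))) < 1e-12)  # True
```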
Now sample with slice sampling
nsamples= 10000
samples= bovy_mcmc.slice(numpy.array([1.]),1.,loglike,(data,),
                        isDomainFinite=[True,True],domain=[0.,2.],
                        nsamples=nsamples)
hist(numpy.array(samples),
     range=[0.,2.],bins=0.3*numpy.sqrt(nsamples),
     histtype='step',color='k',normed=True)
x95= sort...
inference/Gaussian-Dispersion-Inference-Errors.ipynb
jobovy/misc-notebooks
bsd-3-clause
Dependence on $N$ We write a function that returns the 95% upper limit as a function of sample size $N$
def uplimit(N,ntrials=30,nsamples=1000):
    out= []
    for ii in range(ntrials):
        data= numpy.random.normal(size=N)
        samples= bovy_mcmc.slice(numpy.array([1.]),1./N**0.25,loglike,(data,),
                                isDomainFinite=[True,True],domain=[0.,2.],
                                nsamples=nsamples...
inference/Gaussian-Dispersion-Inference-Errors.ipynb
jobovy/misc-notebooks
bsd-3-clause
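The 95% upper limit itself is just the 95th percentile of the posterior samples, which can be read off by sorting. A sketch with stand-in samples (a uniform grid, purely to show the mechanics):

```python
import numpy

# Stand-in posterior samples for sigma (uniform grid, just to show the mechanics)
samples = numpy.linspace(0., 2., 1000)

# 95% upper limit: the value below which 95% of the samples fall
x95 = numpy.sort(samples)[int(numpy.floor(0.95*len(samples)))]
print(round(x95, 2))  # 1.9
```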
If we visualize the graph with graph2use='exec', we can see where the parallelization actually takes place.
# Visualize the detailed graph
from IPython.display import Image
wf.write_graph(graph2use='exec', format='png', simple_form=True)
Image(filename='/output/smoothflow/graph_detailed.dot.png')
nipype_tutorial/notebooks/basic_iteration.ipynb
kdestasio/online_brain_intensive
gpl-2.0
Now, let's visualize the results!
%pylab inline
from nilearn import plotting
plotting.plot_anat(
    '/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz',
    title='original', display_mode='z',
    cut_coords=(-50, -35, -20, -5), annotate=False)
plotting.plot_anat(
    '/output/smoothflow/skullstrip/sub-01_ses-test_T1w_brain.nii.gz',
    title='sk...
nipype_tutorial/notebooks/basic_iteration.ipynb
kdestasio/online_brain_intensive
gpl-2.0
IdentityInterface (special use case of iterables) A special use case of iterables is the IdentityInterface. The IdentityInterface allows you to create Nodes that perform a simple identity mapping, i.e. Nodes that only work on parameters/strings. For example, let's say you want to run a preprocessing workflow over 5 sub...
# First, let's specify the list of input variables
subject_list = ['sub-01', 'sub-02', 'sub-03', 'sub-04', 'sub-05']
session_list = ['run-01', 'run-02']
fwhm_widths = [4, 8]
nipype_tutorial/notebooks/basic_iteration.ipynb
kdestasio/online_brain_intensive
gpl-2.0
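The iterable expansion over these lists is essentially a Cartesian product — 5 subjects × 2 sessions × 2 smoothing widths = 20 parameter combinations. A pure-Python sketch of what the engine ends up enumerating (not the Nipype API itself):

```python
from itertools import product

subject_list = ['sub-01', 'sub-02', 'sub-03', 'sub-04', 'sub-05']
session_list = ['run-01', 'run-02']
fwhm_widths = [4, 8]

# Every (subject, session, fwhm) combination the workflow would iterate over
combos = list(product(subject_list, session_list, fwhm_widths))
print(len(combos))  # 20
print(combos[0])    # ('sub-01', 'run-01', 4)
```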
United Airlines and American Airlines
start = '07-01-2015'
end = '07-01-2017'

united = quandl.get('WIKI/UAL', start_date = start, end_date = end)
american = quandl.get('WIKI/AAL', start_date = start, end_date = end)

united.head()
american.head()
american['Adj. Close'].p...
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/03-First-Trading-Algorithm.ipynb
arcyfelix/Courses
apache-2.0
Spread and Correlation
np.corrcoef(american['Adj. Close'], united['Adj. Close'])

spread = american['Adj. Close'] - united['Adj. Close']
spread.plot(label='Spread', figsize = (12,8))
plt.axhline(spread.mean(), c = 'r')
plt.legend()
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/03-First-Trading-Algorithm.ipynb
arcyfelix/Courses
apache-2.0
Normalizing with a z-score
def zscore(stocks):
    return (stocks - stocks.mean()) / np.std(stocks)

zscore(spread).plot(figsize = (14,8))
plt.axhline(zscore(spread).mean(), color = 'black')
plt.axhline(1.0, c = 'r', ls = '--')
plt.axhline(-1.0, c = 'g', ls = '--')
plt.legend(['Spread z-score', 'Mean', '+1', '-1']);
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/03-First-Trading-Algorithm.ipynb
arcyfelix/Courses
apache-2.0
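The z-score transform recenters a series to mean 0 and rescales it to unit standard deviation; a minimal numeric check on a toy series:

```python
import numpy as np
import pandas as pd

def zscore(stocks):
    # (x - mean) / std; np.std defaults to the population std (ddof=0)
    return (stocks - stocks.mean()) / np.std(stocks)

s = pd.Series([1., 2., 3., 4., 5.])
z = zscore(s)
print(round(float(z.mean()), 10))   # 0.0
print(round(float(np.std(z)), 10))  # 1.0
```

One design detail worth knowing: `np.std` uses ddof=0 while the pandas `.std()` method defaults to ddof=1, so the two give slightly different scales; for a threshold signal either works as long as it is applied consistently.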
Rolling Z-Score Our spread is currently American-United. Let's decide how to calculate this on a rolling basis for our use in Quantopian
# 1 day moving average of the price spread
spread_mavg1 = spread.rolling(1).mean()

# 30 day moving average of the price spread
spread_mavg30 = spread.rolling(30).mean()

# Take a rolling 30 day standard deviation
std_30 = spread.rolling(30).std()

# Compute the z score for each day
zscore_30_1 = (spread_mavg1 - spread_...
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/03-First-Trading-Algorithm.ipynb
arcyfelix/Courses
apache-2.0
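On a toy series the rolling pieces behave like this — a 1-day rolling mean is just the series itself, so the z-score compares today's spread to its trailing-window distribution (a 3-day window here instead of 30, to keep the example tiny):

```python
import pandas as pd

spread = pd.Series([1., 2., 3., 4., 10.])

mavg1 = spread.rolling(1).mean()  # a 1-day rolling mean is just the series itself
mavg3 = spread.rolling(3).mean()  # trailing 3-day mean (NaN for the first 2 days)
std3 = spread.rolling(3).std()    # trailing 3-day standard deviation

z = (mavg1 - mavg3) / std3
print(round(float(z.iloc[-1]), 3))  # 1.145
```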
Implementation of Strategy WARNING: YOU SHOULD NOT ACTUALLY TRADE WITH THIS!
import numpy as np

def initialize(context):
    """
    Called once at the start of the algorithm.
    """
    # Every day we check the pair status
    schedule_function(check_pairs, date_rules.every_day(),
                      time_rules.market_close(minutes = 60))

    # Our Two Airlines
    context.aa = sid(45971) #aal ...
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/10-Quantopian-Platform/03-First-Trading-Algorithm.ipynb
arcyfelix/Courses
apache-2.0
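The pairs logic the algorithm schedules boils down to a threshold rule on the rolling z-score. A standalone sketch of that rule — the thresholds and the function itself are hypothetical, not the Quantopian API:

```python
def pair_signal(z, entry=1.0, exit=0.5):
    # Hypothetical thresholds; returns a target position for the spread:
    # -1 = short the spread, +1 = long it, 0 = flat, None = hold current position
    if z > entry:
        return -1   # spread unusually wide: bet on reversion downward
    if z < -entry:
        return 1    # spread unusually narrow: bet on reversion upward
    if abs(z) < exit:
        return 0    # back near the mean: close out
    return None

print(pair_signal(1.5))   # -1
print(pair_signal(-2.0))  # 1
print(pair_signal(0.1))   # 0
print(pair_signal(0.7))   # None
```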
Pick an example to test if load.cc works
# -- inputs
X_test[0]

# -- predicted output (using Keras)
yhat[0]
TF/DummyNet.ipynb
mickypaganini/IPNN
mit
Inspect the protobuf containing the model's architecture and logic
from tensorflow.core.framework import graph_pb2

# -- read in the graph
f = open("models/graph.pb", "rb")
graph_def = graph_pb2.GraphDef()
graph_def.ParseFromString(f.read())

import tensorflow as tf

# -- actually import the graph described by graph_def
tf.import_graph_def(graph_def, name = '')

for node in graph_def.n...
TF/DummyNet.ipynb
mickypaganini/IPNN
mit
Reading a file in Pandas

Reading a CSV file is really easy in Pandas. There are several formats that Pandas can deal with.

|Format Type|Data Description|Reader|Writer|
|---|---|---|---|
|text|CSV|read_csv|to_csv|
|text|JSON|read_json|to_json|
|text|HTML|read_html|to_html|
|text|Local clipboard|read_clipboard|to_clipboa...
family=pd.read_csv(RAW_DATA_DIR+'/familyxx.csv')
persons=pd.read_csv(RAW_DATA_DIR+'/personsx.csv')
samadult=pd.read_csv(RAW_DATA_DIR+'/samadult.csv')
househld=pd.read_csv(RAW_DATA_DIR+'/househld.csv')
househld
notebooks/Explore_Files.ipynb
gsentveld/lunch_and_learn
mit
A bit about Series and DataFrames

Pandas uses two main data structures:

- Series
- DataFrames

Series are lists of elements with an index. The index can be simple (0,1,2,3,...) or complex (01-Jan-17, 02-Jan-17, ...)

DataFrames can be seen as a dictionary of Series. This means you can access a column by name.
persons_in_household=househld['ACPT_PER']
notebooks/Explore_Files.ipynb
gsentveld/lunch_and_learn
mit
You can index the result like you would index a list
print(persons_in_household[:5])
notebooks/Explore_Files.ipynb
gsentveld/lunch_and_learn
mit
But the type is a Series
print(type(persons_in_household))
notebooks/Explore_Files.ipynb
gsentveld/lunch_and_learn
mit
and that means it has additional attributes, like name, and summary stats
print(persons_in_household.name) print(persons_in_household.describe())
notebooks/Explore_Files.ipynb
gsentveld/lunch_and_learn
mit
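A Series built from scratch shows the same attributes; the values here are hypothetical, only the column name is taken from the dataset:

```python
import pandas as pd

# Hypothetical household counts
s = pd.Series([2, 3, 1, 4], name='ACPT_PER')

print(s.name)                # ACPT_PER
stats = s.describe()         # count, mean, std, min, quartiles, max
print(float(stats['mean']))  # 2.5
```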
Selecting multiple columns If you select multiple columns, the result again is a DataFrame
accepted=househld[['ACPT_PER','ACPTCHLD']]
print(type(accepted))
print(accepted.head())
notebooks/Explore_Files.ipynb
gsentveld/lunch_and_learn
mit
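The single-vs-double bracket distinction can be sketched on a toy frame: a single column name gives a Series, while a list of names gives a DataFrame:

```python
import pandas as pd

toy = pd.DataFrame({'ACPT_PER': [2, 3], 'ACPTCHLD': [1, 0], 'REGION': [1, 2]})

one_col = toy['ACPT_PER']                 # a single name -> Series
two_cols = toy[['ACPT_PER', 'ACPTCHLD']]  # a list of names -> DataFrame

print(type(one_col).__name__)   # Series
print(type(two_cols).__name__)  # DataFrame
```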
Basic Stats and Plotting Pandas has a lot of built-in statistical functionality and plotting features
accepted.describe()
accepted.plot(kind='box')
notebooks/Explore_Files.ipynb
gsentveld/lunch_and_learn
mit
Looking at the data by Region We can have a look at the data by region. This shows another very simple way of plotting data.
househld['REGION'].plot(kind='hist')
notebooks/Explore_Files.ipynb
gsentveld/lunch_and_learn
mit
What we see is that the number of responses is not equally divided. But the data also has a weight for each household that allows us to extrapolate to the overall US population. We will use Group By to add up the weights.
by_region=househld[['REGION','WTFA_HH']].groupby('REGION').sum()
by_region
notebooks/Explore_Files.ipynb
gsentveld/lunch_and_learn
mit
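The weighted extrapolation is just a per-group sum of the weight column; on made-up numbers:

```python
import pandas as pd

# Made-up household weights
hh = pd.DataFrame({
    'REGION': [1, 1, 2, 2, 2],
    'WTFA_HH': [100, 150, 200, 50, 50],
})

# Sum the extrapolation weights within each region
by_region = hh[['REGION', 'WTFA_HH']].groupby('REGION').sum()
print(int(by_region.loc[1, 'WTFA_HH']))  # 250
print(int(by_region.loc[2, 'WTFA_HH']))  # 300
```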